Generative AI (GenAI) has been popular with the public for over a year, but adopting a technology with such disruptive potential in the business world is taking longer. In fact, many organizations are still reluctant to introduce GenAI into their business models in any capacity, with security concerns being one of the main reasons for their apprehension.
The potential productivity benefits of GenAI are outweighed, in the minds of senior executives, by the danger of data getting into the wrong hands. But what if this refusal to formally adopt AI, or create a decisive policy on its use, is creating a culture of covert use of AI and is already putting sensitive data at risk?
We saw this same dynamic play out just a few years ago with shadow IT.
What is shadow IT?
Shadow IT describes technology, whether software or hardware, that is used in workplace practices or activities without the knowledge or approval of the IT team. This could be as simple as accessing secure workplace networks through an unsecured phone, accessing sensitive files through a personal laptop, or using a cloud service without oversight from central IT management.
For IT professionals, the term “shadow IT” should raise alarm bells. Unknown devices or software accessing an organization's data can lead to security vulnerabilities, data breaches, and violations of data regulations: all serious events that can compromise the integrity and reputation of any business.
In today's technology landscape, AI tools are the latest innovation offering potential value at every level of a business, while also representing a new concern for IT teams looking to keep their organizations free of risk. Without adequate measures in place, companies face a considerable challenge: ensuring that their employees use AI responsibly, without resorting to the kind of practices we used to call “shadow IT.”
However, history has shown that with any major new technological advancement that promises substantial new capabilities and benefits, this type of unofficial use is likely to rear its head…
iPhones and GenAI: not so different, you and me
In many ways, the current atmosphere around GenAI is comparable to the iPhone boom of the late 2000s; the monumental launch that first defined shadow IT and its potential impacts.
During this time, companies discovered that an increasing number of employees were adopting this new technology, for which they were not adequately prepared. Previously, only a small minority of workers carried their personal laptops or early smartphones to the workplace, but the launch of the iPhone created significant demand for new mobile management practices.
When such policies were not implemented, employees would take it upon themselves to change the way they worked with their new devices, often redirecting mail and documents to their personal accounts so they could use their iPhones. Many organizations recognized the risks of a wave of new technology accessing sensitive data, but they also saw the benefit of a more connected workforce on the go; the iPhone, after all, offered many of the capabilities of a laptop without the size and weight. The biggest losers were the IT leaders who put their organization's security at risk by not addressing the issue at all.
GenAI in the workplace
Like the iPhone, GenAI has entered the mass-market consumer space. Although some companies refuse to acknowledge the new technology, many of their people are already using it to improve their work. Services like Grammarly and ChatGPT have become commonplace tools that have proven their business value, and the workforce knows it.
The fact of the matter is that companies that fail to recognize and monitor the wave of AI tools and services already being adopted may be unprepared for the changes to the threat landscape that wave has brought. Employees are seeing the productivity benefits of AI and are not shying away from using it in the workplace, whether or not it is authorized by the IT department.
In fact, SnapLogic's recent study on generative AI found that 40% of office workers surveyed had used GenAI for their work without disclosing it to their employer or colleagues. Combine this with a slightly more worrying figure (more than two-thirds say they do not have enough knowledge of AI for their role) and it becomes clear that a lack of training and education from organizations is not deterring employees from using GenAI in areas that may be putting sensitive data at risk.
The survey results also point to an opportunity for companies to adopt an education policy and advocate AI best practices among their workforce. Workers use GenAI because they can see the benefits it can bring them; 47% believe GenAI could save them 6-10 hours of work per week in the future. This is not only a risk to be mitigated; it is an opportunity to leverage technology that delivers measurable business value, provided the right security measures are in place.
The smart business leader will see these parallels and learn from the mistakes of previous shadow IT episodes, fostering an AI strategy that enables employees to harness the transformative productivity benefits of generative AI while avoiding the risks that could arise from unidentified or unapproved AI tools and practices.
Going forward
The key takeaway here is not to take a completely indifferent approach to the wave of GenAI tools now available. Instead, IT departments should establish a consultative partnership with the rest of the business, providing much-needed guidance on how to interact with AI in a way that encourages safe use while maximizing productivity where possible. This should be combined with defined security guardrails that allow employees to experiment without exposing the business to risk. The ultimate goal is to take inspiration from mass-market consumer AI and replicate its productivity benefits in an enterprise environment, leveraging the organization's own data to simplify processes, improve accuracy, and make activities more user-friendly: using natural language prompts to access internal systems while complying with relevant regulatory and security policies, for example.
Creating an atmosphere of openness and curiosity will give visibility into any AI adoption, allowing its risks and benefits to be properly assessed. By providing this, IT teams can build trust within their organization while reducing the risks around what could be a game-changer for productivity and efficiency.
This article was produced as part of TechRadar Pro's Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadar Pro or Future plc. If you are interested in contributing, find out more here.