New research from Zscaler has criticized companies for rushing to adopt AI tools while often overlooking their own cybersecurity.
The survey of more than 900 global IT decision makers found that 95% of organizations use generative AI tools like ChatGPT in their businesses, but 89% consider them risky.
The study paints a picture of the current state of oversight and cybersecurity, revealing that a considerable number of companies are exposing themselves to greater risk without due consideration.
Companies are not protecting themselves against GenAI
A third (33%) of the companies surveyed had not implemented any additional security measures to address generative AI, although some had begun to explore the issue. Nearly a quarter (23%) were not even monitoring GenAI usage.
Sanjay Kalra, vice president of product management at Zscaler, said: “With the current ambiguity surrounding their security measures, just 39% of organizations perceive their adoption as an opportunity rather than a threat. This not only jeopardizes the integrity of their business and customer data, but also wastes its tremendous potential.”
According to the research, smaller companies are more likely to perceive the use of generative AI as risky.
Fortunately, the research also suggests companies have an opportunity to slow down, step back and take stock: only a fraction of the driving force for AI adoption comes from employees, with the majority coming from IT teams.
With seemingly little worker appetite for AI at this stage, companies should be able to slow down or even pause the deployment of AI tools to reassess security.
However, the clock is ticking, because just over half (51%) of Zscaler's respondents expect interest to increase substantially by the end of the year, leaving companies with a matter of weeks to adjust their processes.