Artificial intelligence is already changing professional working patterns in almost every industry. It has the power to dramatically reduce the time we spend on routine tasks and allow us to think more strategically about our everyday professional lives.
The same is true for the IT and cybersecurity sector: at ISACA, our survey of business and IT professionals in Europe found that nearly three-quarters (73%) of organizations report that their staff use AI at work.
The key issue with AI, however, as transformative as it may be, is ensuring we use it responsibly and safely. After all, LLMs are often trained on sensitive data, and we need proper guardrails around these tools so that hallucinations don’t compromise the integrity of our work. Yet despite this widespread use of AI in the workplace, only 17% of the organizations we surveyed have a formal, comprehensive AI policy in place that outlines the company’s approach to these issues and provides best practices for its use.
AI is changing the threat landscape
At the same time, cybercriminals are also gaining access to AI, using it to strengthen their operations and criminal capabilities and making their attacks more convincing and effective than ever before. This is a threat not just to individuals but to businesses as well. Companies are interconnected organizations with networks of suppliers and professional relationships – when one is breached, every organization in the network is at risk.
The recent CrowdStrike outage highlighted how vulnerable businesses are to even a single IT failure or cyberattack. When one provider in the digital supply chain is affected, the entire chain can collapse, leading to large-scale disruption – a digital pandemic. A faulty update, the unfortunate result of a lapse in oversight and testing, unleashed chaos across a range of critical industries, from aviation and healthcare to banking and broadcasting.
Sometimes these incidents stem from unintentional errors in software updates, and sometimes they are the result of a cyberattack. The irony is that cybersecurity companies are part of the supply chain too: the very firms tasked with building cyber resilience can themselves become victims, affecting service continuity.
Cybersecurity professionals are well aware of this: when we asked about the potential for generative AI to be exploited by malicious actors, 61% of respondents were extremely or very concerned about this happening. Compared with our survey data from last year, sentiment has hardly improved.
Training and capacity building are key to long-term resilience
AI cuts both ways: malicious actors are weaponizing the technology to develop more sophisticated attacks, while cyber professionals are using it to keep pace with the changing threat landscape and to detect and respond to those threats more effectively. Employees know they need to keep up with cybercriminals, improve their skills, and truly master AI, yet when we asked our respondents how familiar they are with the technology, nearly three-quarters (74%) said they were only somewhat familiar or not very familiar at all.
The CrowdStrike incident underscored the need for more robust and resilient digital infrastructure, and the rise of AI will only make cyber threats more significant. As an industry, we must invest in upskilling and training to avoid similar crises in the future, and advances in technologies such as AI could be key to working more efficiently. Appropriate protocols need to be put in place well in advance so that organizations can act quickly when attacks and service disruptions occur and minimize the damage. But this isn’t possible without people who have the skills to set up bespoke security frameworks and ensure everyone involved is trained to follow them.
For companies to protect themselves and their partners in the long term, while still realizing the benefits of AI, they need the right skills to identify new threat models, risks, and controls. AI training in the cybersecurity sector is badly needed: at the moment, 40% of companies offer no such training to employees in technology roles. Meanwhile, 34% of respondents believe they will need to increase their knowledge of AI within the next six months, and an overwhelming 86% expect to need this training within the next two years.
By taking an approach to AI that prioritizes training and comprehensive workplace policies, businesses and employees alike can be confident they are harnessing AI’s potential safely and responsibly, keeping pace with cyber threats as they evolve and protecting both the company itself and every other business in its broader network.
This article was produced as part of TechRadarPro's Expert Insights channel, where we feature the best and brightest minds in the tech industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you're interested in contributing, find out more here.