Artificial intelligence has quickly become a cornerstone of modern business, driving innovation and efficiency across all industries. However, as companies increasingly turn to AI to handle sensitive tasks, they are also exposing themselves to new security vulnerabilities.
As companies integrate AI into their operations, AI entities are becoming more autonomous and gaining access to more sensitive data and systems. As a result, CISOs face new cybersecurity challenges. Traditional security practices, designed for human users and conventional machines, are not sufficient when applied to AI. It is therefore vital that companies address these emerging vulnerabilities if they want to avoid the security issues that come with uncontrolled AI integration and protect their most valuable data assets.
Offensive Research Evangelist at CyberArk Labs.
AI: More than just machines
Each type of identity has a different function and capability. Humans generally know how to protect their passwords: it is fairly obvious to most people, for example, that they should avoid reusing the same password or choosing one that is easy to guess. Machines, including servers and computers, often store or manage passwords, but they are vulnerable to breaches and cannot themselves prevent unauthorized access.
AI entities, including chatbots, are difficult to categorize when it comes to cybersecurity. These non-human identities manage critical enterprise passwords, but they differ significantly from traditional machine identities such as software, devices, virtual machines, APIs, and bots. AI is therefore neither a human identity nor a machine identity; it is in a unique position. It combines human-guided learning with machine autonomy and needs access to other systems to function. However, it lacks the judgment to set boundaries and avoid sharing sensitive information.
Investments are increasing, but security is lagging behind
Businesses are investing heavily in AI, with 432,000 UK organisations (16%) reporting that they have adopted at least one AI technology. AI adoption is no longer a trend but a necessity, so spending on emerging technologies is expected to continue to increase in the coming years. The UK AI market is currently worth over £16.8bn and is projected to grow to £801.6bn by 2035.
However, rapid investment in AI often outpaces identity security measures. Companies don’t always understand the risks AI poses, so following security best practices or investing enough time in securing AI systems isn’t always a priority, leaving those systems vulnerable to potential cyberattacks. Traditional controls, such as access management and least privilege rules, are also not straightforward to apply to AI systems. And security teams, already stretched by everything they have in place, struggle to find enough time to secure AI workloads.
CyberArk’s 2024 Identity Security Threat Landscape Report reveals that while 68% of UK organisations report that up to half of their machine identities access sensitive data, only 35% include these identities in their definition of privileged users and apply the corresponding identity security measures. This oversight is risky: AI systems, loaded with sensitive training data, are high-value targets for attackers, and a breach could expose intellectual property, financial information and other sensitive data.
The threat of cloud attacks on AI systems
Security threats to AI systems are not unique, but their scope and scale could be. LLM systems, which are constantly updated with new training data from the company itself, quickly become prime targets for attackers once deployed. Because they are trained on real data rather than test data, each update can surface valuable corporate secrets, financial data, and other sensitive assets. AI systems inherently trust the data they receive, making them particularly susceptible to being tricked into divulging protected information.
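To make that risk concrete, here is a minimal, purely illustrative sketch in Python of an output guardrail: the model's raw answer is never returned directly, but first scrubbed for anything that looks like a credential. The `generate_response` stub and the regex watchlist are hypothetical placeholders; a real deployment would rely on a dedicated DLP or secret-scanning service, not a handful of patterns.

```python
import re

# Hypothetical patterns for secrets an LLM trained on real corporate data
# might reproduce verbatim; illustrative only, not a production control.
SENSITIVE_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key ID format
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),        # PEM private key header
    re.compile(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b"),   # card-like numbers
]

def redact(text: str) -> str:
    """Replace anything matching a sensitive pattern before it leaves the system."""
    for pattern in SENSITIVE_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

def generate_response(prompt: str) -> str:
    # Stand-in for the real model call; returns a canned answer so the
    # sketch runs end to end.
    return "The deploy key is AKIAABCDEFGHIJKLMNOP, per the runbook."

def answer(prompt: str) -> str:
    # The model's raw output is never trusted or returned directly.
    return redact(generate_response(prompt))

print(answer("What credentials does the deploy pipeline use?"))
# -> The deploy key is [REDACTED], per the runbook.
```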
In particular, cloud attacks on AI systems enable lateral movement and jailbreaking, allowing attackers to exploit a system’s vulnerabilities and trick it into spreading misinformation to the public. Cloud account and identity breaches are common, many stemming from stolen credentials, and have caused significant damage to major brands in the technology, banking, and consumer sectors.
AI can also be used to carry out more complex cyberattacks. For example, it allows malicious actors to analyze each permission tied to a particular role within a company and assess whether that permission can be used to gain access and move laterally through the organization.
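Defenders can run the same permission analysis proactively. The sketch below, assuming an AWS environment and the boto3 SDK, walks the managed policies attached to a role and flags actions commonly abused for lateral movement; the role name and the watchlist are illustrative, not drawn from any particular environment.

```python
import boto3

# Actions frequently abused for privilege escalation or lateral movement;
# this watchlist is illustrative, not exhaustive.
RISKY_ACTIONS = {
    "iam:PassRole", "sts:AssumeRole", "iam:CreateAccessKey",
    "iam:AttachRolePolicy", "lambda:UpdateFunctionCode",
}

def audit_role(role_name: str) -> None:
    """Flag risky permissions in the managed policies attached to a role."""
    iam = boto3.client("iam")
    attached = iam.list_attached_role_policies(RoleName=role_name)
    for policy in attached["AttachedPolicies"]:
        arn = policy["PolicyArn"]
        version_id = iam.get_policy(PolicyArn=arn)["Policy"]["DefaultVersionId"]
        document = iam.get_policy_version(
            PolicyArn=arn, VersionId=version_id
        )["PolicyVersion"]["Document"]
        statements = document.get("Statement", [])
        if isinstance(statements, dict):  # a single statement may not be list-wrapped
            statements = [statements]
        for statement in statements:
            if statement.get("Effect") != "Allow":
                continue
            actions = statement.get("Action", [])
            if isinstance(actions, str):
                actions = [actions]
            for action in actions:
                if action == "*" or action in RISKY_ACTIONS:
                    print(f"{role_name}: {policy['PolicyName']} allows {action}")

audit_role("ai-chatbot-service-role")  # hypothetical AI service identity
```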
So what’s the next sensible step? Companies are still in the early stages of integrating AI and LLMs, so establishing strong identity security practices will take time. However, CISOs can’t afford to sit back and wait; they must develop proactive strategies to protect identities from AI before a cyberattack occurs or new regulations come into effect requiring them to do so.
Key steps to strengthen AI security
While there is no silver bullet for AI security, there are certain measures that businesses can implement to mitigate the risks. More specifically, there are some key actions CISOs can take to improve their AI identity security posture as the industry continues to evolve.
• Identifying overlaps: CISOs should make it a priority to identify areas where existing identity security measures can be applied to AI. Leveraging existing controls, such as access management and least privilege principles, wherever possible can help improve security (see the sketch after this list).
• Safeguarding the environment: CISOs need to understand the environment in which AI operates in order to protect it as effectively as possible. While a dedicated AI security platform isn’t strictly necessary, protecting the environment in which AI activity takes place is vital.
• Building an AI security culture: It’s difficult to get all employees to adopt identity security best practices without a strong AI security mindset. Involving security experts in AI projects means they can share their knowledge and experience with all employees and ensure everyone is aware of the risks of using AI. It’s also important to consider how data is processed and how the LLM is trained, so that employees think about what’s involved in using emerging technologies and exercise extra care.
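On the first point, extending least privilege to an AI identity can look much like it does for any other machine identity. As a hedged illustration, again assuming AWS IAM and boto3, the sketch below provisions a narrowly scoped policy for a hypothetical chatbot service account that may only read from a single knowledge-base bucket; every name in it is a placeholder.

```python
import json
import boto3

# A hypothetical least-privilege policy for an AI service identity:
# read-only access to a single knowledge-base bucket, and nothing else.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::example-kb-bucket/*",  # placeholder bucket
    }],
}

iam = boto3.client("iam")
created = iam.create_policy(
    PolicyName="ai-chatbot-least-privilege",  # placeholder name
    PolicyDocument=json.dumps(policy_document),
)
# Attach only this policy to the AI identity's role, so a compromised
# chatbot cannot read credentials, write data, or assume other roles.
iam.attach_role_policy(
    RoleName="ai-chatbot-service-role",  # placeholder role
    PolicyArn=created["Policy"]["Arn"],
)
```

The same idea applies on any platform: the AI identity gets exactly the access its task requires, and its entitlements are reviewed like those of any other privileged account.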
The use of AI in the enterprise presents both huge opportunities and unprecedented security challenges. As we navigate this new landscape, it’s becoming clear that traditional security measures are insufficient for the unique risks posed by AI systems. The role of CISOs is no longer simply to manage conventional cybersecurity threats; it now involves recognizing the distinctive nature of AI identities and protecting them accordingly. Therefore, businesses must ensure they invest time and resources in finding the right balance between innovation and security to keep up with the latest trends while protecting their most valuable assets.
This article was produced as part of TechRadarPro's Expert Insights channel, where we showcase the best and brightest minds in the tech industry today. The views expressed here are those of the author, and not necessarily those of TechRadarPro or Future plc. If you're interested in contributing, find out more here: