More needs to be done to address the lack of skills and resources around AI integration and security, Maria Markstedeter, CEO and founder of Azeria Labs, told the audience at the recent Dynatrace Perform 2024 conference in Las Vegas.
To combat the risks posed by new innovations such as AI agents and composite AI, security teams and data scientists must improve their communication and collaboration.
Having experienced the frustrations of lack of resources due to his experience reverse engineering ARM processors, Markstedter believes that better collaboration and understanding is needed to minimize the threats posed by AI integrations.
“You cannot find vulnerabilities in a system that is not fully understood”
The increasing size and complexity of data processed by AI models is pushing the limits of what security teams are capable of modeling threats, especially when security professionals lack the resources to understand them.
New attacks and new vulnerabilities “require you to understand data science and how AI systems work, but also at the same time [have] a very deep understanding of security, threat modeling and risk management,” says Markstedter.
This is especially true when it comes to new multimodal AI systems that can process multiple data inputs, such as text, audio, and images, at the same time. Markstedter notes that while unimodal and multimodal AI systems differ greatly in the data they can process, the general call-and-response nature of human-AI interaction remains largely the same.
“This transactional nature is simply not the silver bullet we were hoping for. This is where AI agents come in.”
AI agents present a solution to this highly transactional nature by essentially having the ability to “think” about their task and arrive at a unique end result depending on the information available at that moment.
This poses a significant and unprecedented threat to security teams, as “the notion of identity and access management needs to be re-evaluated because we are basically entering a world where we have a non-deterministic system that has access to a multitude of data.” and applications and is authorized to perform non-deterministic actions.
Markstedter maintains that because these AI agents will need access to internal and external data sources, there is a significant risk that these agents will receive malicious data that might otherwise appear non-harmful to a security tester.
“This processing of external data will be even more complicated with multimodal AI because now malicious instructions do not have to be part of the text of a website or an email, but can be hidden in images and audio files.”
But it's not all bad news. The evolution of composite systems that combine multiple AI technologies into a single product can “create tools that give us a much more interactive and dynamic analytical experience.”
By combining threat modeling with composite AI and encouraging security teams to collaborate more closely with data scientists, it is possible to greatly mitigate not only the risks posed by AI integrations, but also improve the skills of security equipments.