A team of researchers has created a self-replicating computer worm that crawls around the web to attack applications powered by Gemini Pro, ChatGPT 4.0 and LLaVA AI.
Researchers developed the worm to demonstrate the risks and vulnerabilities of AI-enabled applications, particularly how links between generative AI systems can help spread malware.
In their report, researchers Stav Cohen of the Israel Institute of Technology, Ben Nassi of Cornell Tech and Ron Bitton of Intuit created the name “Morris II” after the original worm that wreaked havoc on the Internet in 1988.
w Zero-click worms unleashed on AI
The worm was developed with three key goals in mind. The first is to ensure that the worm can recreate itself. By using adversarial self-replicating messages that trigger AI applications to generate the original message, the AI will automatically replicate the worm each time it uses the message.
The second objective was to deliver a payload or perform malicious activity; In this case, the worm was programmed to perform one of several actions. From stealing confidential information to writing insulting and rude emails to sow toxicity and distribute propaganda.
Finally, the worm needed to be able to jump between hosts and AI applications in order to spread through the AI ecosystem. The first method targets AI-assisted email applications using recovery augmented generation (RAG) by sending a poisoned email which is then stored in the target's database. When the recipient attempts to reply to the email, the AI assistant automatically generates a response using the poisoned data and then propagates the self-replicating message through the ecosystem.
The second method requires the generative AI model to execute an input, which then creates an output that requests the AI to spread the worm to new hosts. When the next host becomes infected, it immediately spreads the worm beyond itself.
In tests conducted by the researchers, the worm was able to steal social security numbers and credit card details.
The researchers submitted their paper to Google and OpenAI to raise awareness about the potential dangers of these worms, and while Google had no comment, an OpenAI spokesperson said cabling that “they appear to have found a way to exploit fast injection type vulnerabilities by relying on user input that has not been verified or leaked.”
Worms like these highlight the need for more research, testing, and regulation when it comes to implementing generative AI applications.