Warning: AI Now Attacks AI in Self-Spreading Worm
In a groundbreaking study, a team of scientists has unveiled the creation of the first-of-its-kind malicious AI worm capable of autonomously spreading among generative AI agents, paving the way for potential data theft and spam distribution. This breakthrough heralds a new genre of cyberattacks that could be executed within interconnected, autonomous artificial intelligence ecosystems.
The researchers from Cornell Tech, including Ben Nassi, Stav Cohen, and Ron Bitton, devised a worm dubbed Morris II, in homage to the original Morris computer worm which wreaked havoc on the internet in 1988. Their experiments demonstrated how such a worm could be employed to launch attacks on AI-based email assistants to pilfer data from emails and disseminate spam, thereby circumventing certain security measures in systems like ChatGPT and Gemini.
The study places particular emphasis on hostile self-replicating queries that compel an AI model to generate a new query in its response. This method bears resemblance to traditional attacks such as SQL Injection and Buffer Overflow, the researchers noted.
To showcase the worm’s capabilities, the researchers created an email system capable of sending and receiving messages using generative AI, connecting to ChatGPT, Gemini, and an open-source Large Language Model (LLM) named LLaVA. They identified two methods of system exploitation: utilizing a text-based self-replicating query and embedding a self-replicating query within an image.
The research underscores that generative AI worms represent a new security risk of concern to startups, developers, and tech companies alike. Although generative AI worms have not yet been detected in the wild, security experts believe the risk of their future emergence is significant.
The researchers have reported their findings to Google and OpenAI, highlighting the need for the development of more robust security systems and cautioning developers against the use of malicious inputs. Google declined to comment on the study, while a representative from OpenAI acknowledged the ongoing work to enhance the resilience of their systems against such attacks.
Some protective measures against potential AI worms already exist, such as employing traditional security approaches and ensuring human participation in the decision-making processes of AI agents. Experts stress the importance of developing secure applications and monitoring to prevent such threats.