Researchers created an AI worm that steals data and infects ChatGPT and Gemini

Programming Languages1121

A new AI worm is found to steal credit card information from AI-powered email assistants. A worm named Morris II was created by a group of security researchers that potentially infects popular AI models like ChatGPT and Gemini.

The created computer worm targets Gen AI-powered applications and demonstrates it against Gen AI-powered email assistants. It has already been demonstrated against GenAI-powered email assistants to steal personal data and launch spamming campaigns.

A group of researchers, Ben Nassi from Cornell Tech, Stav Cohen from the Israel Institute of Technology, and Ron Bitton from Intuit created Morris II, a first-generation AI worm that can steal data, spread malware, spam others through an email client, and spread through multiple systems.

This worm was developed and successfully functions in test environments using popular LLMs. The team has published a paper titled “ ComPromptMized: Unleashing Zero-click Worms that Target GenAI-Powered Applications” and created a video showing how they used two methods to steal data and affect other email clients.

Naming the AI worm after Morris, the first computer worm that rippled worldwide attention online in 1988, this worm targets AI apps and AI-enabled email assistants that generate text and images using models like Gemini Pro, ChatGPT 4.0, and LLaVA.

The researchers warned that the worm represented a new breed of “zero-click malware”, where the user does not need to click on anything to trigger the malicious activity or even propagate it. Instead, it is carried out by the automatic action of the generative AI tool. They further added, “The study demonstrates that attackers can insert such prompts into inputs that, when processed by GenAI models, prompt the model to replicate the input as output (replication) and engage in malicious activities (payload)”. Additionally, Morris II successfully mined confidential information such as social security numbers and credit card details during the research.

Conclusion

With developing ideas of using AI in cyber security, further tests and attention to such details must be prioritized before embedding AI to secure data and information.