AI Worms Are Crawling Up as New AI Parasites Invade Your Devices
As generative AI systems continue to advance, the realm of possibilities they offer expands exponentially. With entities like OpenAI’s ChatGPT and Google’s Gemini gaining sophistication, they are increasingly integrated into various applications, promising to streamline mundane tasks for users. However, as these Artificial Intelligence (AI) systems are granted more autonomy, they also become susceptible to exploitation and pose new security risks.
In a striking demonstration of the vulnerabilities inherent in connected, autonomous AI ecosystems, a group of researchers has engineered what they claim to be one of the first generative AI worms. These digital entities have the potential to spread from one system to another, potentially compromising data security or deploying malicious software in the process. Ben Nassi, a Cornell Tech researcher involved in the project, emphasises the gravity of this development, stating that it introduces a new frontier in cyber threats.
Named Morris II as a homage to the notorious Morris computer worm that wreaked havoc on the Internet in 1988, this AI worm represents a significant milestone in AI security research. In the research paper, Nassi and his colleagues, Stav Cohen and Ron Bitton, unveil the capabilities of Morris II. They illustrate how this AI worm can exploit vulnerabilities in generative AI email assistants, such as ChatGPT and Gemini, to pilfer data from emails and propagate spam messages, breaching security protocols in the process.
The research underscores the potential risks associated with the proliferation of Large Language Models (LLMs) that are increasingly becoming multimodal, capable of generating not only text but also images and videos. While generative AI worms have not yet been spotted in real-world scenarios (still theoretical), experts warn that they represent a tangible security threat in the next two to three years thus, demanding attention from startups, developers, and tech companies alike.
Why Experts Are Worried
As these generative AI systems gain autonomy, they are becoming a medium of susceptibility to exploitation by malicious actors.
These worms exploit the very capabilities that make AI assistants so useful. Leonardo Hutabarat, Head of Solutions Engineering at LogRhythm, describes AI worms as “a type of malware designed to target popular generative AI models such as ChatGPT to steal confidential data and send out spam.”
Unlike traditional worms that require user interaction, generative AI worms leverage a chilling tactic: “Adversarial self-replicating prompts.”
The potential consequences are dire. As Rik Ferguson, Vice President of Security Intelligence at Forescout, warns, “Our networks and critical national infrastructure have only ever become more connected.” Generative AI worms can exploit this interconnectedness, spreading through AI ecosystems and infecting numerous devices. These worms can potentially compromise critical infrastructure, steal sensitive data, or wreak havoc by disseminating fake news.
Should We or Not Should We?
A question still remains, though – Is generative AI a security boon or bane for organisations? Security experts Rik Ferguson and Genie Gan offer valuable insights, highlighting both the potential risks and secure implementation strategies.
Rik stresses that the key question isn’t “Should we use generative AI?” but rather “How can we do it securely?” He acknowledges the inherent risks associated with AI integration:
- Data Exposure: Rik warns against training models on sensitive data. “Make sure that your model is not being trained on data that you wouldn’t want to be exposed to in raw form,” he advises. Maintaining data neutrality is crucial, ensuring that any user with access to the AI has the authorisation to see the training data without explicitly exposing it.
- 4th Party Risk: The interconnected nature of AI ecosystems introduces a new layer of vulnerability. Organisations relying on external generative AI models (like those from Microsoft or OpenAI, for instance) need to assess the security posture of these 4th-party providers.
- Custom GPT Vulnerability: The growing popularity of custom Generative Pre-trained Transformers (GPTs) presents a unique security challenge. The Vice President of Security Intelligence at Forescout also highlights the susceptibility of these custom models, designed for non-security experts, to “prompt injection attacks.” These attacks can potentially expose sensitive information or lead to model misuse.
Genie Gan, Director of Government Affairs & Public Policy at Kaspersky, presents a more nuanced perspective. She acknowledges the potential for malicious use but also highlights generative AI’s potential to enhance security, particularly in the development of cybersecurity applications. According to her, generative AI can streamline the development of secure applications and systems, potentially reducing vulnerabilities.
However, while acknowledging generative AI’s potential benefits, Genie outlines three key risk areas that organisations should be aware of:
- Attack Planning and Targeting: Generative AI can assist attackers by generating detailed guides and advice for planning and targeting victims.
- Automated Malicious Code Creation: AI can potentially develop code and entire coding projects, empowering individuals with limited programming expertise to create malicious applications.
- Phishing and Spear Phishing: Generative AI’s ability to produce convincing and accurate text can be exploited for phishing attacks, potentially leading to significant cyber incidents.
Building a Secure Future with AI
So, how can we combat this emerging threat? Security experts like Leonardo emphasise the need for proactive developer practices. The first line of defence lies in “verifying inputs before processing them into AI models to prevent malicious code injection.” This approach mirrors traditional methods of safeguarding against SQL injection attacks. Essentially, developers need to scrutinise both the data entering the AI system and the data it generates. Additionally, he recommends “logging at the transaction level” to monitor for abnormal activity. Imagine a scenario where a hacker sends a malicious email to an AI-powered auto-response system. Transaction-level logging would document the sender’s information and the destination of the response, enabling swift identification and investigation of suspicious activity.
Beyond developer vigilance, Rik puts into focus the importance of user awareness. As AI assistants become more prevalent, users need to remain vigilant against potential threats. “Sophisticated threat detection systems” can play a crucial role in safeguarding against these novel cyber attacks.
This means that user awareness also plays a critical role. Organisations need to educate their employees about the potential for generative AI-driven phishing attacks and empower them to identify and report suspicious activity.
A Future of Promise, But Vigilance Is Key
The exploration of AI’s potential is a double-edged sword. While it holds immense promise for revolutionising various aspects of our lives, the research on generative AI worms serves as a stark reminder of the inherent risks involved.
Ultimately, the responsibility lies with both developers and users. On one hand, developers must prioritise secure design principles and continuously evaluate their AI models for vulnerabilities. Users, on the other hand, need to understand the limitations of AI assistants and remain vigilant about potentially malicious interactions. By approaching AI with a balanced perspective, acknowledging both its potential and limitations, we can harness its power for good and minimise the risk of AI morphing from a helpful tool into a malicious threat. The future of AI is not predetermined, and the choices we make today will determine whether it becomes a force for progress or a source of potential harm.