Malicious AI Use Cases to Monitor in 2024: Recorded Future Report
Threat intelligence company Recorded Future today released the latest report from its threat research division, Insikt Group, which reveals that malicious use cases of artificial intelligence (AI) are most likely to emerge from targeted deepfakes and influence operations. The report also found that more advanced use cases, such as malware development and reconnaissance, will benefit from advances in generative AI.
Recorded Future’s threat intelligence analysts and R&D engineers collaborated to test four malicious use cases of AI to illustrate “the art of the possible” for threat actor use. They tested the limitations and capabilities of current AI models, ranging from large language models (LLMs) to multimodal image and text-to-speech (TTS) models. All testing was conducted using a mix of off-the-shelf and open-source models to simulate realistic threat actor access.
The key findings from the report, titled Adversarial Intelligence: Red Teaming Malicious Use Cases for AI, are:
Use Case #1: Using deepfakes to impersonate executives
- Open-source capabilities currently allow for pre-recorded deepfake generation using publicly available video footage or audio clips, such as interviews and presentations.
- Threat actors can use short clips (<1 minute) to train these models. However, acquiring and pre-processing audio clips for optimal quality continues to require human intervention.
- More advanced use cases, such as live cloning, almost certainly require threat actors to bypass consent mechanisms on commercial solutions, as latency issues on open-source models likely limit their effectiveness in streaming audio and video.
Use Case #2: Influence operations impersonating legitimate websites
- AI can be used to effectively generate disinformation at scale, targeted to a specific audience, and can produce complex narratives in pursuit of disinformation goals.
- AI can be used to automatically curate rich content (such as real images) based on generated text, in addition to assisting humans in cloning legitimate news and government websites.
- The cost of producing content for influence operations will likely fall by a factor of 100 compared with traditional troll farms and human content writers.
- However, creating templates to impersonate legitimate websites remains a significant task requiring human intervention to produce believable spoofs.
Use Case #3: Self-augmenting malware evading YARA
- Generative AI can be used to evade string-based YARA rules (YARA is a tool that helps malware researchers identify and classify malware samples) by augmenting the source code of small malware variants and scripts, effectively lowering detection rates; a minimal example of such a rule appears after this list.
- However, current generative AI models face several challenges in creating syntactically correct code and resolving linting issues, and they struggle to preserve functionality after obfuscating the source code.
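To illustrate what a string-based YARA rule looks like in practice, the sketch below compiles and runs one minimal rule through the open-source yara-python bindings. The rule name, the matched strings, and the scanned file path are hypothetical placeholders chosen for illustration, not material from the Recorded Future report.

```python
# Minimal sketch of a string-based YARA rule, compiled and run with the
# open-source yara-python bindings. The rule name, strings, and scanned
# file path below are hypothetical placeholders for illustration only.
import yara

# A string-based rule fires when hard-coded artefacts appear verbatim in a
# file. Rewriting the source code (renaming identifiers, re-encoding
# literals) removes these strings, which is how detection rates are lowered.
RULE_SOURCE = r"""
rule Hypothetical_Script_Strings
{
    strings:
        $s1 = "connect_to_c2" ascii
        $s2 = "harvest_browser_credentials" ascii
        $s3 = "http://placeholder-c2.example" ascii
    condition:
        2 of ($s*)
}
"""

rules = yara.compile(source=RULE_SOURCE)

# Scan a file on disk; match() returns the list of rules that fired.
for match in rules.match("sample_under_test.py"):
    print("matched rule:", match.rule)
```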
Use Case #4: ICS and aerial imagery reconnaissance
- Multimodal AI can be used to process public imagery and videos to geolocate facilities and identify industrial control system (ICS) equipment (the devices, networks, controls, and systems used to operate and/or automate industrial processes), and to determine how that equipment is integrated into other observed systems.
- Translating this information into actionable targeting data at scale remains challenging, as human analysis is still required to process extracted information for use in physical or cyber threat operations.
The full report provides further examples of the use cases and details how Recorded Future researchers tested each one.
According to a spokesperson from Recorded Future’s Insikt Group:
“Executives’ voices and likenesses are now part of an organisation’s attack surface, and organisations need to assess the risk of impersonation in targeted attacks. Large payments and sensitive operations should use several alternate methods of communication and verification, other than conference calls and VoIP, such as encrypted messaging or emails.
“Organisations, particularly in the media and public sector, should track instances of their branding or content being used to conduct influence operations.
“Organisations should invest in multi-layered and behavioural malware detection capabilities in the event that threat actors are able to develop AI-assisted polymorphic malware. Sigma, Snort and complex YARA rules will almost certainly remain reliable indicators for malware activity for the foreseeable future.
“Publicly accessible images and videos of sensitive equipment and facilities should be scrutinised and scrubbed, particularly for critical infrastructure and sensitive sectors such as defence, government, energy, manufacturing, and transportation.”