Mitigating AI-Driven Cyber Threats Through an Olympic Lens
by Sharat Nautiyal, Director, APJ Security Engineering, Vectra AI
The upcoming Olympic and Paralympic Games are set to showcase the world’s best athletes; however, these highly anticipated sporting fixtures are also expected to become a focal point for another reason: a surge in cybercrime.
Individuals and organisations continue to ramp up their adoption of Artificial Intelligence, a trend further propelled by the arrival of Generative AI (GenAI) tools such as ChatGPT and Microsoft Copilot. These tools are best described as ‘search engines on steroids’ because of the speed at which they can process large volumes of data. It is no surprise that threat actors are now targeting the new attack surfaces created by GenAI adoption, which gives them more ways to infiltrate an organisation and exfiltrate data.
A not-so-fun fact: the number of reported incidents during the upcoming Paris Olympics could be as much as eight to ten times greater than at the Tokyo event, where a staggering 450 million individual cyberattacks were reported. Based on those numbers, we should anticipate somewhere in the region of 3.5 to 4.5 billion individual cyberattacks throughout the Paris event. That represents a global cybersecurity risk of unprecedented magnitude and scale. As technology and security leaders, we need to be ready.
Understanding the Olympic Cybersecurity Risk and Why This Matters
Identity-based threats and email compromise are already pressing concerns for security professionals; however, the Paris Olympics will see these threats reach new levels. Cybercriminals are expected to use GenAI to create everything from fake travel documents and event tickets to bogus accommodation and holiday offers designed to lure unsuspecting individuals.
With many employees using their work devices to manage personal tasks such as booking a flight or an event ticket, they could unknowingly be putting their organisation’s security at risk. Further complicating matters, many employees use Microsoft 365 collaboration tools on their mobile devices, so any threat of business email compromise (BEC) or phishing has the potential to impact the entire enterprise ecosystem.
What makes the detection of these Olympic-branded attacks more challenging is that malicious GenAI tools are widely available to hackers on the dark web. Attackers no longer need the skill to create a macro in Microsoft Word, or even to sign up and log in, to produce and polish phishing emails at scale. These criminal GenAI tools provide step-by-step instructions and multiple AI- and LLM-generated suggestions for crafting BEC messages that appear convincing and authentic. The impact on the Games will be widespread unless properly contained.
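To make this concrete from a defender’s perspective, here is a deliberately simple Python sketch that triages Olympic-themed lures by scoring keywords, pressure language and lookalike sender domains. It is purely illustrative: the keyword list, trusted domains, weights and example values are assumptions for this article, not any vendor’s product logic.

# Toy triage heuristic for Olympic-themed phishing lures (illustrative only;
# keywords, domains and weights are assumptions, not a production filter).
import re

LURE_KEYWORDS = {"olympic", "paralympic", "ticket", "hospitality", "prize", "refund"}
TRUSTED_DOMAINS = {"olympics.com", "paris2024.org"}

def lure_score(subject: str, body: str, sender_domain: str) -> int:
    """Score a message on lure keywords, pressure language and lookalike sender domains."""
    text = f"{subject} {body}".lower()
    score = sum(1 for word in LURE_KEYWORDS if word in text)
    # A domain that imitates an official brand but is not on the trusted list is suspicious.
    if any(brand in sender_domain for brand in ("olympic", "paris2024")) \
            and sender_domain not in TRUSTED_DOMAINS:
        score += 3
    if re.search(r"urgent|final notice|act now", text):
        score += 2  # pressure language typical of GenAI-polished lures
    return score

# Example: a high score would route the message to quarantine for human review.
print(lure_score("Urgent: Olympic ticket refund",
                 "Claim your Paris hospitality package now",
                 "olympics-ticket-refund.example"))

In practice, static rules like this are easy for attackers to rephrase around, which is exactly why they need to be complemented by the behaviour-based detection discussed next.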
Using Behaviour-Based AI Threat Detection to Protect Against Lateral Movement
Despite advancements in technology and AI, one thing remains constant: the human element. Humans are fallible, and threat actors know this, frequently exploiting it through phishing and social engineering campaigns to gain a foothold in their victims’ networks.
While many breaches can be prevented with basic cyber hygiene, most organisations continue to invest in protecting their network perimeter rather than in the much-needed security controls that address what attackers rely on once they are inside: lateral movement.
CISOs should consider building a layered approach that goes beyond preventative controls and known behaviours to understanding and mitigating unknown threats. These threats require visibility, context and controls, and strategic security partners can provide significant support in these areas. Behaviour-based, AI-driven detection is the key to catching unknown threats and attackers deploying new, evasive methods.
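To illustrate what behaviour-based detection means in practice, the minimal sketch below baselines which internal hosts each account normally authenticates to and flags accounts that suddenly fan out to several never-before-seen hosts, a common signature of lateral movement. It is a simplified illustration under assumed event fields and thresholds, not Vectra AI’s detection logic.

# Minimal sketch of behaviour-based lateral-movement detection (illustrative only;
# the event fields and fan-out threshold are assumptions, not product logic).
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class LogonEvent:
    account: str
    source_host: str
    dest_host: str

def build_baseline(history: list[LogonEvent]) -> dict[str, set[str]]:
    """Record which destination hosts each account has touched historically."""
    baseline: dict[str, set[str]] = defaultdict(set)
    for event in history:
        baseline[event.account].add(event.dest_host)
    return baseline

def detect_lateral_movement(events: list[LogonEvent],
                            baseline: dict[str, set[str]],
                            fan_out_threshold: int = 3) -> list[str]:
    """Flag accounts that authenticate to several hosts outside their baseline."""
    new_hosts: dict[str, set[str]] = defaultdict(set)
    for event in events:
        if event.dest_host not in baseline.get(event.account, set()):
            new_hosts[event.account].add(event.dest_host)
    return [f"{account}: unusual access to {sorted(hosts)}"
            for account, hosts in new_hosts.items()
            if len(hosts) >= fan_out_threshold]

# Example: an account that normally touches two servers suddenly reaches three new ones.
history = [LogonEvent("j.doe", "laptop-01", "fileshare-01"),
           LogonEvent("j.doe", "laptop-01", "mail-01")]
today = [LogonEvent("j.doe", "laptop-01", host) for host in ("db-01", "db-02", "dc-01")]
for alert in detect_lateral_movement(today, build_baseline(history)):
    print(alert)

In a real environment the baseline would be learned continuously and weighted by factors such as time of day, privilege level and protocol, which is where AI-driven models add value over static rules.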
Promoting Safer Employee Behaviour Using Relatable Threat Scenarios
Effective cybersecurity awareness campaigns consider the psychological side of human behaviour. They aim to engage users by addressing cognitive biases, employing behavioural psychology principles and using relatable examples to promote safer online practices.
Simply reminding employees about the threats of GenAI may not, on its own, create the awareness and behaviour change needed. Providing context and real-life examples quickly changes that. For instance: “There have been many cases recently of people caught out by Olympic-related scams such as phishing emails and other fraudulent activity, often while using their work devices, which has exposed their workplace to a cyber threat. It could be you next time, so please be aware and take preventive steps.”
By training employees, users and customers to recognise these biases and to develop strategies for mitigating their effects, cybersecurity professionals can help them make more accurate judgements and decisions, ultimately improving the security and resilience of their digital assets.
The Road to Success: Battling AI-Powered Cyber Threats in a GenAI Era
The Paris Olympics may be a battle for sporting dominance, but it is AI that will be at the heart of the security battle this Olympic season. We now live in a world where GenAI tools are widely available, and cyber attackers are developing AI-based capabilities to commit crimes faster, smarter and cheaper, with very little skill needed.
Taking the necessary steps to defend your organisation against the growing threat of AI-powered attacks can help guard against costly long-term breaches, protect against evolving attack methods, and ensure we are all able to enjoy and celebrate significant events such as the Olympic Games.