Imperva Urges Businesses to Prepare for AI-Driven Cyber Threats
Imperva, a Thales company (@Imperva), the cybersecurity leader that protects critical applications, APIs, and data anywhere at scale, warns that organisations need to be aware of, and step up protection against, increasingly prevalent AI-generated cyber threats and bots (software applications that perform repetitive tasks over a network).
Imperva’s recent Bad Bot Report found a distinct rising trend in simple bad bots. These bots accounted for 39.6% of bad bot traffic in 2023, compared to 33.4% in 2022 and 26.3% five years ago. The growth of simple bad bots reflects the increasing use of AI by cybercriminals, who often deploy the technology to create simple yet effective bot scripts for attacking organisations.
AI is revolutionising the world, enhancing efficiency and innovation across industries by automating complex tasks and providing deep data insights. However, this technological marvel also introduces new risks, particularly in cybersecurity.
Malicious actors can leverage AI to create sophisticated cyber threats and bots that bypass traditional security measures, launch large-scale automated attacks, and mimic human behaviour with alarming accuracy. Generative AI, in particular, can enable a new generation of less technical threat actors to write the automation scripts and code that fuel all types of bad bot activity, including account takeover attacks, identity theft, and financial fraud such as carding.
For more advanced threat actors, image-recognition AI is now being deployed in more sophisticated automation attacks to bypass checks and controls such as CAPTCHA challenges, which historically could weed out bot-based automation from real humans interacting with a website or web application. Research shows that AI can now solve CAPTCHA challenges more accurately than humans.
In APAC, the banking and financial services sector is especially under siege. According to Imperva’s research, the industry has the second-highest proportion (79.61%) of advanced bot traffic due to the potential for high financial gain.
Targeted account takeover and carding-style attacks are prominent against banks throughout the region, along with prankster-style attacks that can cause costly disruptions.
Recently, a bank in APAC suffered a bot attack that exploited its SMS notification API, which is used to inform customers about transactional activities on their accounts. The attackers managed to send over a million SMS notifications to random international mobile numbers, chalking up about US$600,000 in bogus SMS charges for the bank before the activity was discovered and stopped 14 days later.
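This abuse pattern, often called SMS pumping or toll fraud, can be blunted with basic server-side controls. The following is a minimal Python sketch, assuming a hypothetical notification endpoint; the country-code allow-list, limits, and function names are illustrative assumptions, not details from the incident:

```python
import time
from collections import defaultdict

# Hypothetical mitigation sketch: prefixes and limits are illustrative
# assumptions, not details of the affected bank's actual API.
ALLOWED_PREFIXES = ("+65", "+60", "+62")   # markets the bank actually serves
MAX_SMS_PER_NUMBER_PER_HOUR = 5
WINDOW_SECONDS = 3600

_send_log = defaultdict(list)  # destination number -> recent send timestamps

def may_send_sms(destination: str) -> bool:
    """Allow an SMS only to served markets and within a per-number rate cap."""
    # Reject destinations outside the operating regions entirely.
    if not destination.startswith(ALLOWED_PREFIXES):
        return False
    # Keep only timestamps still inside the rate window.
    now = time.time()
    recent = [t for t in _send_log[destination] if now - t < WINDOW_SECONDS]
    if len(recent) >= MAX_SMS_PER_NUMBER_PER_HOUR:
        return False
    recent.append(now)
    _send_log[destination] = recent
    return True

# Example: a scripted burst to one number is cut off after five sends.
for i in range(8):
    print(i, may_send_sms("+6591234567"))
```

Either control alone would have limited the blast radius here: the allow-list blocks random international numbers outright, and the per-number cap throttles any destination that does slip through.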
“AI has become a vital tool for cybercriminals, who use it in various nefarious ways. One significant factor is the accessibility of open-source AI, which cybercriminals can adapt without safety guardrails and restrictions. This allows them to generate boundless harmful results from the misuse of the technology,” said Reinhart Hansen, Director of Technology at Imperva. “Leveraging AI to formulate new attack variants from existing known application vulnerabilities, or to overcome cybersecurity controls like CAPTCHA, accelerates the ongoing cat-and-mouse game between threat actors and organisations.”
Imperva advises organisations to adopt a solid defence-in-depth strategy for protecting their digital presence and their most valuable asset: customer data.
Fundamentally, they need to:
- Define and enact a comprehensive application security strategy that protects the data behind the applications that broker access to it;
- Pursue a secure-by-design application development mindset, embracing a DevSecOps culture; and,
- Ensure operational application of protection controls like DDoS protection, advanced bot mitigation, web application firewall, and API security, and have them work in a coordinated way across their security stack (a simple illustration follows this list).
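To illustrate the kind of coordination the last point describes, here is a minimal Python sketch of a bot-scoring check that combines several weak signals before a request reaches an application. The signal names, weights, and thresholds are illustrative assumptions, not a description of Imperva's or any vendor's detection logic:

```python
# Hypothetical bot-scoring sketch: signals and weights are illustrative
# assumptions, not any vendor's actual detection logic.
KNOWN_AUTOMATION_AGENTS = ("curl", "python-requests", "headlesschrome")

def bot_score(user_agent: str, requests_last_minute: int,
              solved_js_challenge: bool) -> int:
    """Combine weak signals into one score; higher means more bot-like."""
    score = 0
    ua = user_agent.lower()
    if any(agent in ua for agent in KNOWN_AUTOMATION_AGENTS):
        score += 50          # self-identified automation tooling
    if requests_last_minute > 60:
        score += 30          # request rate beyond typical human browsing
    if not solved_js_challenge:
        score += 20          # failed a lightweight JavaScript check
    return score

def handle_request(user_agent: str, rpm: int, solved_js: bool) -> str:
    score = bot_score(user_agent, rpm, solved_js)
    if score >= 70:
        return "block"       # clearly automated traffic
    if score >= 40:
        return "challenge"   # step up to CAPTCHA or MFA
    return "allow"

print(handle_request("python-requests/2.31", 120, False))  # block
print(handle_request("Mozilla/5.0", 10, True))             # allow
```

The design point is that no single signal is decisive on its own; layering several cheap checks, and escalating to a challenge rather than blocking outright in the ambiguous middle band, mirrors how coordinated controls such as a WAF and bot mitigation are meant to work together.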
As AI continues to advance, organisations must remain vigilant and adopt proactive security strategies to defend against these evolving threats, ensuring that AI’s potential dangers do not overshadow its benefits.