
The Rising Challenge of AI-Enhanced Identity Fraud

by Johan Fantenberg, Principal Solutions Architect – Asia Pacific & Japan at Ping Identity

Last December, a deepfake of Singapore's then-Prime Minister Lee Hsien Loong caused a furore when it was used to lure people towards a supposed investment opportunity. And Mr Lee is not the only prominent political figure – living or deceased – to have had his likeness impersonated. During Indonesia's February elections, the Golkar party ran a campaign featuring the late strongman President Suharto addressing users on TikTok, in a bid to burnish its credentials and garner support. These are by no means isolated instances of artificial intelligence (AI) misuse: the democratisation and easy availability of AI apps and tools have made it easier for cybercriminals to scale their efforts.

In the face of this, existing risk mitigation measures are increasingly being outflanked. A recent survey found that 85% of Singapore firms anticipate an increase in attempts to compromise identities over the next year, and the same survey indicates that 55% of firms are deeply concerned about the growth of identity fraud driven by AI.

The Impact of Generative AI on the Cybersecurity Landscape
It is remarkable how convincing deepfakes, including synthetic voices, can be when generated from just a small amount of personal data. These capabilities enable more successful attacks: a person's voice can be cloned from audio captured during a previous scam call, for example. They also facilitate more sophisticated phishing, because AI can draw on information scraped from social media and other networks to craft an email that is indistinguishable from a legitimate one.

Unsurprisingly, cybercriminals are focusing on industries like banking and healthcare, where data breaches can reveal highly sensitive personal information. This data is used to build complete digital personas, enabling identity theft and large-scale fraud. Victims face serious consequences, including financial loss and the long slog of identity restoration.

The more advanced generative AI becomes, the harder it is to identify fabricated audio and visual content. Older users, for example, are frequently easy targets for digital scams, many of which go undetected precisely because of their sophistication. Even the tech-savvy are now susceptible, as we are in an era where attack capabilities turbocharged by AI are readily available to malicious actors.

The Role of Identity Protection and Management
The necessity of remaining vigilant and sceptical is fundamental. Individuals and organisations must critically assess and verify the legitimacy of information before acting upon it. That scepticism must inform their responses: even the slightest suspicion should ring alarm bells and prompt people and organisations to escalate the matter. Awareness and education are crucial here, as they form the base of a holistic security strategy – one which equips stakeholders to detect and respond to AI-facilitated scams as much as it gives organisations the tools to withstand the attempts of threat actors.

In terms of what solutions to adopt, the first port of call is to protect digital identities. Identity and Access Management (IAM) solutions play an important part in this by improving detection capabilities and implementing strong verification processes such as Multi-factor Authentication (MFA). These technologies aid in detecting anomalies and ensuring that extra proof of identity is obtained before granting access.
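The anomaly detection and step-up verification described above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual implementation: the device and location histories, the scoring weights, and the factor names (`otp`, `passkey`) are all hypothetical stand-ins for what a real IAM platform would derive from session and device telemetry.

```python
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    user_id: str
    device_id: str
    country: str

# Hypothetical per-user history; a real IAM system would query a device/session store.
KNOWN_DEVICES = {"alice": {"laptop-01"}}
USUAL_COUNTRIES = {"alice": {"SG"}}

def risk_score(attempt: LoginAttempt) -> int:
    """Toy anomaly score: an unknown device and an unusual location each add risk."""
    score = 0
    if attempt.device_id not in KNOWN_DEVICES.get(attempt.user_id, set()):
        score += 2
    if attempt.country not in USUAL_COUNTRIES.get(attempt.user_id, set()):
        score += 1
    return score

def required_factors(attempt: LoginAttempt) -> list[str]:
    """Step-up policy: the riskier the sign-in looks, the more proof of identity is required."""
    score = risk_score(attempt)
    if score == 0:
        return ["password"]
    if score <= 2:
        return ["password", "otp"]
    return ["password", "otp", "passkey"]
```

A sign-in from a familiar device in a usual country passes with a single factor, while a new device appearing from an unusual country triggers the full step-up – the "extra proof of identity" the article refers to.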

The shift to passwordless authentication – replacing traditional credentials, which can easily be stolen, with a cryptographic verification step – is a helpful way to mitigate these concerns. This approach strengthens the security architecture and minimises the probability of credential compromise.
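The cryptographic verification step can be illustrated with a toy challenge-response flow. Note the simplification: real passwordless schemes such as FIDO2/WebAuthn use public-key signatures from a hardware-bound key, whereas this sketch uses HMAC with a shared device key purely as a standard-library stand-in. The point it demonstrates holds either way – no reusable password crosses the wire, only a one-time response to a fresh challenge.

```python
import hashlib
import hmac
import secrets

def server_issue_challenge() -> bytes:
    # A fresh random nonce per login attempt prevents replay of old responses.
    return secrets.token_bytes(32)

def device_sign(device_key: bytes, challenge: bytes) -> bytes:
    # The key never leaves the device; only the computed response is sent.
    return hmac.new(device_key, challenge, hashlib.sha256).digest()

def server_verify(device_key: bytes, challenge: bytes, response: bytes) -> bool:
    # Constant-time comparison avoids leaking information through timing.
    expected = hmac.new(device_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

Because the response is bound to a single random challenge, a phisher who intercepts it cannot replay it later – unlike a stolen password, which works until it is changed.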

In addition, as AI advances, the regulatory landscape will also change. Governments and international organisations are looking to adopt new laws and policies to regulate AI effectively and mitigate the risks associated with its use. These rules must allow room for new technologies to develop while safeguarding against their misuse, ensuring that the digital world remains a safe place for all users.

What to Expect in the Future
Looking ahead, the rise of AI-enabled identity fraud appears unavoidable, necessitating both increased scrutiny of multimedia materials and innovations in content verification technology. Keeping up with cybercriminals’ constantly evolving strategies will be critical for security organisations attempting to counter these increasingly sophisticated threats. Being diligent and proactive in identity protection tactics will be more important than ever in preventing the success of these AI-powered attacks.


