
Strengthening Defences Against AI-Enhanced Identity Threats

by Jasie Fon, Regional Vice President of Asia, Ping Identity

If you thought deepfakes were only fooling people who lacked tech literacy, think again. A recent advisory by the Cyber Security Agency of Singapore (CSA) pointed to a case where scammers used deepfake technology to pose as the chief financial officer of a multinational firm and tricked a finance employee into paying them US$25 million.

This tactic of impersonating people in powerful positions is not new; last December a deepfake of Singapore’s then-Prime Minister (now Senior Minister) Lee Hsien Loong made the rounds on social media, in which he purportedly endorsed a cryptocurrency investment scheme. The truly concerning development is that, now that artificial intelligence (AI) tools are available to any run-of-the-mill criminal, these deepfakes are about to become more common, and harder to spot even for experts.

According to a Ping Identity survey just released this month (May), 85% of Singapore businesses expect an increase in AI-driven security threats within the next year. More than half of the Singapore respondents surveyed said they are concerned about the rise in identity fraud this might bring.

One troubling trend driving this is AI being used to bypass even sophisticated identity checks. More and more, cybercriminals are turning to AI tools to tamper with voice ID verification systems at call centres, where the authenticity of a caller is confirmed by matching their voice to a known sample.

The rapid development of generative AI technologies has enabled cybercriminals to quickly create synthetic duplicates of an individual’s voice from just one high-quality recording. This technology is also used to improve the effectiveness of phishing attempts, using AI algorithms to create personalised and increasingly difficult-to-detect phishing emails.

Cybercriminals’ primary goal in using AI to steal digital identities is to access corporate networks containing valuable data that can be held for ransom or sold for huge sums of money. Industries that hold large amounts of personal information, such as banking and healthcare, are particularly vulnerable and are frequently targeted for both identity theft and financial gain.

Countermeasures against such well-crafted threats require continuous vigilance from both individuals and organisations. This entails regularly checking the authenticity of digital communications and tracing the origin of all unsolicited inquiries. Furthermore, all encounters with deepfakes or hacking incidents should be reported to the relevant authorities.

The Critical Role of IAM Systems

Given the rise of AI-driven security threats, protecting digital identities becomes more challenging, particularly as the demand for transactions in the digital space grows.

Identity and Access Management (IAM) systems are essential to helping organisations identify and mitigate identity fraud efficiently. They strengthen security by incorporating multiple verification methods during user authentication: by monitoring IP addresses, geographic locations, and previous user activity for anomalies, IAM systems can detect potential threats. IAM can also require additional verification such as multi-factor authentication, which may involve sending a push notification to a user’s smartphone, an extra step that goes beyond simply entering credentials.
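To make the idea concrete, the contextual checks described above can be sketched as a simple risk-scoring function. This is a minimal illustration, not how any particular IAM product works; the user history, signal weights, and names (`LoginAttempt`, `requires_mfa`) are all hypothetical.

```python
from dataclasses import dataclass


@dataclass
class LoginAttempt:
    user_id: str
    ip: str
    country: str
    hour: int  # local hour of day, 0-23


# Hypothetical per-user history an IAM system might maintain.
KNOWN_HISTORY = {
    "alice": {
        "ips": {"203.0.113.5"},
        "countries": {"SG"},
        "typical_hours": range(8, 20),
    },
}


def risk_score(attempt: LoginAttempt) -> int:
    """Score a login attempt; higher means more anomalous."""
    history = KNOWN_HISTORY.get(attempt.user_id)
    if history is None:
        return 3  # no context for this user: treat as high risk
    score = 0
    if attempt.ip not in history["ips"]:
        score += 1  # unfamiliar IP address
    if attempt.country not in history["countries"]:
        score += 1  # unusual geographic location
    if attempt.hour not in history["typical_hours"]:
        score += 1  # activity outside the user's normal pattern
    return score


def requires_mfa(attempt: LoginAttempt) -> bool:
    # Step up to multi-factor authentication when anything looks anomalous.
    return risk_score(attempt) >= 1
```

A familiar login (known IP, home country, normal hours) would pass straight through, while a login from a new country at 3 a.m. would trigger the additional verification step, such as a push notification.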

However, a more robust approach might be to avoid credentials altogether. This can be achieved through passwordless authentication, which replaces easily stolen credentials with a cryptographic function as part of the authentication process. As passwords often represent the weakest link in a security system, replacing them with more secure methods of verification can significantly improve safety.
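The core of such a scheme is challenge-response: the server issues a random challenge, and a user's device answers with a cryptographic proof, so no password ever crosses the wire. The sketch below illustrates the flow only; production passwordless systems (e.g. FIDO2/WebAuthn) use public-key signatures so the server stores no secret at all, and the `Device`/`Server` classes here are invented stand-ins using an HMAC to keep the example standard-library only.

```python
import hashlib
import hmac
import secrets


class Device:
    """Stand-in for an authenticator that holds a key and proves possession."""

    def __init__(self) -> None:
        self._key = secrets.token_bytes(32)  # never leaves the device

    def enroll(self) -> bytes:
        # With public-key crypto this would export a public key instead.
        return self._key

    def respond(self, challenge: bytes) -> bytes:
        return hmac.new(self._key, challenge, hashlib.sha256).digest()


class Server:
    def __init__(self) -> None:
        self._enrolled: dict[str, bytes] = {}
        self._pending: dict[str, bytes] = {}

    def register(self, user: str, key: bytes) -> None:
        self._enrolled[user] = key

    def issue_challenge(self, user: str) -> bytes:
        nonce = secrets.token_bytes(16)  # fresh per attempt, defeating replay
        self._pending[user] = nonce
        return nonce

    def verify(self, user: str, response: bytes) -> bool:
        nonce = self._pending.pop(user, None)
        if nonce is None:
            return False  # no outstanding challenge for this user
        expected = hmac.new(self._enrolled[user], nonce, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)
```

Because each challenge is random and single-use, intercepting one response is useless to an attacker: replaying it fails, and no reusable credential exists to phish in the first place.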

Looking ahead, we are likely to see identity fraud using AI continue to grow, making us all more cautious about whether that new video we’re watching is real or fake. This type of fraud will increase to the point that tech experts will need to develop ever more sophisticated technology to verify content authenticity.

As cybercriminals become more experienced and creative with their tricks, security teams must stay vigilant and anticipate their innovations. But above all, the key is to keep your eyes open. The world of cyber threats is constantly evolving, and security teams need to constantly adapt to ensure the safety of their organisations’ digital environment.
