The Rise of Deepfake Deception: Why Biometrics Need a Security Upgrade

by Dominic Forrest, Chief Technology Officer, iProov

The surge in online services and platforms has given today’s organisations a crucial but challenging responsibility: ensuring that individuals are who they claim to be online. Legacy authentication methods, which once could be relied on to gatekeep sensitive information, are now struggling to keep pace with increasingly unpredictable cybercriminals and their tactics. Rapid technological evolution has been paralleled by the growth of cybercriminal networks, which exploit these advancements with growing sophistication and speed. The rise of cybercrime-as-a-service (CaaS), and the ease with which a wide array of malicious tools and services can now be accessed, has enabled attackers to launch advanced attacks faster and at a much larger scale.

The Advantages of Biometrics

For cybercriminals bent on lucrative gains, financial institutions are prime targets, and traditional authentication methods are no longer sufficient in the face of increasingly sophisticated threats. Despite their longevity, passwords repeatedly demonstrate their inadequacy for high-risk use cases: they are easily forgotten, shared, or stolen through social engineering. Similarly, mobile and hardware tokens, often used as a second factor in multi-factor authentication (MFA), are susceptible to loss, theft, or compromise.

Unlike legacy authentication methods, facial biometrics provide a far more secure way for organisations to verify users, particularly when the user’s face is matched against a government-issued ID (which typically includes a facial photo) during onboarding, enrolment, and other remote identity verification scenarios. A person’s facial features are unique to that individual, meaning they cannot be easily shared, stolen, or compromised – until generative AI enters the scene.
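To make the matching step concrete, this kind of remote verification is typically a one-to-one comparison: a fixed-length embedding is computed for the live selfie and for the portrait on the ID document, and the two are compared against a similarity threshold. The Python sketch below is illustrative only; embed_face stands in for a real face-embedding model, and the threshold value is an assumption rather than any vendor’s recommendation.

```python
import numpy as np

def embed_face(image: np.ndarray) -> np.ndarray:
    """Placeholder for a real face-embedding model (e.g. a network that
    maps an aligned face crop to a fixed-length vector). Hypothetical."""
    raise NotImplementedError("plug in a face-embedding model here")

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_identity(selfie: np.ndarray, id_portrait: np.ndarray,
                    threshold: float = 0.6) -> bool:
    """One-to-one match: does the live selfie belong to the person pictured
    on the government-issued ID? The 0.6 threshold is illustrative; real
    systems tune it to a target false-accept rate."""
    return cosine_similarity(embed_face(selfie), embed_face(id_portrait)) >= threshold
```

Crucially, this comparison only establishes that two images show the same face; it says nothing about whether the selfie came from a live person, which is exactly the gap deepfakes exploit.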

The Growing Threat of Deepfakes

With deepfake technology, almost anyone can digitally insert an AI-generated image or video into a video stream with unnerving precision and plausibility. This has driven an alarming increase in both the volume and sophistication of attacks on identity verification in recent years. Notably, iProov, the biometric authentication technology company that provides authentication for Singpass, reported a dramatic 704% rise in face swap attacks from the first half of 2022 to the first half of 2023.

One of the most widely recognised incidents to garner global attention involved a global engineering company that lost approximately $25 million when scammers used deepfake technology to impersonate the group’s Chief Financial Officer and trick an unsuspecting employee into transferring the funds to bank accounts in Hong Kong.

And it seems no one is exempt from having their likeness used in these scams. Even Singapore’s Senior Minister, Lee Hsien Loong, was falsely depicted promoting investment products for the second time in less than a year.

Regulatory Efforts in The Asia-Pacific Region

In light of the rising threat, governments across the Asia-Pacific region are taking steps to regulate deepfakes, but their approaches are still fragmented:

  • China: Prohibits the production of deepfakes without user consent and mandates clear labelling of AI-generated content. Chinese authorities also removed the face-swapping apps Zao and Avatarify from app stores in 2019 and 2021 respectively, citing privacy concerns.
  • South Korea: Made it illegal to distribute deepfakes that could “endanger public interest.”
  • Australia: Plans to introduce measures such as encouraging tech companies to label and watermark AI-generated content.
  • Southeast Asia (Thailand, Philippines, Malaysia, Singapore): These countries have enacted personal data protection laws, which could help mitigate some deepfake exploitation.

Despite these efforts, a more comprehensive and unified approach, which emphasises both prevention and public awareness, is needed across the region.

AI and the Future of Biometric Security

Even taking all of the aforementioned factors into account, facial biometrics remains a highly reliable remote identity verification solution for financial institutions when augmented with a real-world anchor, such as a government-issued ID, and advanced liveness detection capabilities. With liveness detection, organisations can verify in real time that an online user is a genuine, living person who is actually present at the moment of authentication.
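One common way to achieve this is a challenge-response design: the server issues a short-lived, unpredictable challenge (for example, a randomised sequence of screen colours to flash during capture), then checks that the captured imagery reflects that challenge, something a pre-recorded video or injected deepfake cannot anticipate. The sketch below shows only the server-side bookkeeping; response_matches_challenge is a hypothetical stand-in for the actual signal analysis, and all names here are assumptions rather than any product’s API.

```python
import secrets
import time

CHALLENGE_TTL_SECONDS = 30  # responses must be fresh; stale replays expire

_active_challenges: dict[str, tuple[str, float]] = {}

def issue_challenge(session_id: str) -> str:
    """Issue an unpredictable, single-use challenge (e.g. a random colour
    sequence the client's screen must flash during face capture)."""
    nonce = secrets.token_hex(16)
    _active_challenges[session_id] = (nonce, time.time())
    return nonce

def verify_liveness(session_id: str, video_evidence: bytes) -> bool:
    """Accept only a fresh, single-use response consistent with the challenge."""
    entry = _active_challenges.pop(session_id, None)  # pop enforces single use
    if entry is None:
        return False
    nonce, issued_at = entry
    if time.time() - issued_at > CHALLENGE_TTL_SECONDS:
        return False
    return response_matches_challenge(video_evidence, nonce)

def response_matches_challenge(video_evidence: bytes, nonce: str) -> bool:
    """Hypothetical: real systems analyse the captured imagery for the
    challenge signal (e.g. the colour sequence reflected off the face),
    which an attacker cannot know in advance."""
    raise NotImplementedError
```

The single-use, time-limited nonce is what makes replaying an earlier genuine capture ineffective, even if the attacker has a perfect recording of the victim.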

A word of caution, however: not all biometric solutions are equipped with science-based liveness detection. While many solutions on the market do provide some degree of presentation attack detection, most are still unable to identify digitally injected deepfake attacks.

As AI technologies continue to advance and realistic but deceptive content – whether audio, image, or video – becomes increasingly prevalent, these capabilities will be vital for distinguishing truth from falsehood. Furthermore, the pace at which AI is evolving demands constant monitoring of emerging threats, and detection tools should be tested against those threats on an ongoing basis to ensure maximum protection against the escalating risks posed by AI-generated attacks.
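In practice, “tested on an ongoing basis” often takes the form of a simple regression harness: newly observed attack samples are added to a corpus, and the deployed detector is re-scored against it on a schedule so that any drop in performance surfaces quickly. A minimal sketch, assuming a hypothetical detect_deepfake model interface:

```python
from dataclasses import dataclass

@dataclass
class Sample:
    media: bytes
    is_attack: bool  # ground truth: True for known deepfake/injection samples

def detect_deepfake(media: bytes) -> bool:
    """Hypothetical stand-in for the deployed detection model."""
    raise NotImplementedError

def evaluate(corpus: list[Sample]) -> dict[str, float]:
    """Re-score the detector against the growing attack corpus.
    A falling attack_recall signals the model needs retraining."""
    attacks = [s for s in corpus if s.is_attack]
    genuine = [s for s in corpus if not s.is_attack]
    attack_recall = sum(detect_deepfake(s.media) for s in attacks) / max(len(attacks), 1)
    false_alarm_rate = sum(detect_deepfake(s.media) for s in genuine) / max(len(genuine), 1)
    return {"attack_recall": attack_recall, "false_alarm_rate": false_alarm_rate}
```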
