Blurred Lines: Deepfakes Have Claimed Millions, and They’re Just Getting Started
Millions of dollars.
That’s how much cybercriminals have looted with the help of deepfakes, which are unsurprisingly worrying consumers worldwide even as they consistently overestimate their ability to spot these high-tech, Artificial Intelligence (AI)-created imitations.
But consumers shouldn’t be the only ones worried about deepfakes; businesses should be just as concerned, because threat actors are increasingly targeting them with deepfakes—sometimes with surprisingly lucrative results.
Take the case of UK engineering firm Arup, which lost millions to fraudsters in February of this year after one of its finance employees was duped by deepfakes into transferring over US$25 million. The ruse was as elaborate as it was plausible: The worker was set up in a video call with deepfakes of Arup’s Chief Financial Officer (CFO) and several staff members. Long story short, the unsuspecting worker obliged the CFO’s money transfer request—only to find out later that the whole thing was digitally staged.
Nearly the same thing happened in Hong Kong a month later, when a finance employee was likewise tricked into transferring US$262,000 (1.86 million yuan) to a cybercriminal’s account. This time, the ruse was less elaborate—a simple video call with a deepfake of the employee’s boss—but just as effective, because the end goal was still achieved.
You Ain’t Seen Nothing Yet When It Comes to Deepfakes
Chillingly, this use of deepfakes to perpetrate fraud is “just scratching the surface,” according to Jason Hogg, cybersecurity expert, executive-in-residence at Great Hill Partners, and ex-FBI Special Agent. Indeed, the implications of deepfakes run the gamut from annoying to downright damaging, making it imperative to raise awareness about these fakeries, noted Johan Fantenberg, Director at Ping Identity.
“In a world that’s increasingly digital, deepfakes have serious ramifications for people’s security and privacy and conversations about cybersecurity and digital identity are essential to raise awareness,” Fantenberg told Cybersecurity Asia (CSA) in an exclusive email interview. “Cybercriminals can utilise deepfakes in social engineering attacks to trick people into sending money, disclosing private information, or completing fraudulent transactions. We have also seen the use of deepfakes to spread disinformation, especially during elections…”
For Frederic Ho, Vice President of Asia Pacific at Jumio Corporation, it is equally alarming that cybercriminals seem to be getting better and more prolific at using deepfakes for nefarious purposes.
“What we’re seeing now is that deepfake scammers are upping their game, both in terms of sophistication as well as scale,” Ho shared with CSA. “As generative AI tools become increasingly accessible and sophisticated, scammers are using them to generate highly realistic deepfakes for fake news, social media posts, and other types of misinformation. These technologies have made it easier for individuals without specialised technical skills to create deepfakes—faster and cheaper than ever before.”
Ho also rued the advances in real-time processing and more efficient algorithms, which he says have enabled the near-instantaneous generation of deepfakes. These abilities, according to Ho, are “enhancing the potential of deepfakes for misuse in live settings”—and that was precisely the case with Arup and the Hong Kong-based business, both of which were scammed with the help of deepfakes.
These same advancements in technology, incidentally, are making it extremely challenging to detect deepfakes and render them ineffective.
“Detecting deepfakes is increasingly challenging and realistic due to the rapid advancements in the underlying technology. Current deepfakes are highly accurate in imitating delicate human gestures, voice tones, and facial emotions, making it difficult to tell the difference between real and fraudulent content,” Fantenberg pointed out.
Ho shared the same sentiment.
“In the rapidly evolving digital landscape, cybercriminals are increasingly harnessing the power of generative AI to perpetrate sophisticated fraud schemes. This new breed of crime leverages advanced machine learning algorithms to create convincing fake identities and videos with more natural-looking expressions and movements, generate realistic but false documents, and even produce tailored phishing messages that can deceive even the most vigilant individuals,” he explained. “Cybercriminals have also become increasingly inventive, using AI and deepfake technologies to bypass facial recognition systems through a method called ‘camera injection.’ This technique essentially tricks the device’s camera into seeing individuals who are not actually there.”
Deepfakes Are Supercharging Fraud—and Everyone’s a Target
This is particularly alarming if consumers in the rest of Asia, like those in Singapore, overestimate their ability to spot deepfakes—because spotting them is not easy, especially now that multiple widely available technologies can produce hyperrealistic fakes. More alarming still, deepfake scams can now target a wider range of victims with a higher chance of success, as cyber attackers will keep targeting humans and systems indiscriminately and will keep evolving.
“These deepfake capabilities will continue to mature, posing a serious challenge, especially to financial institutions and online businesses that are dependent on traditional KYC or eKYC processes,” Ho rued. “This is really where it’s fraud on steroids, as in the past it would take a fraudster more effort to create one fake identity, but now—with deepfake capabilities and AI—they have just automated the whole process.”
Fraud on steroids.
It’s a fitting description of the kind of fraud cybercriminals can commit with deepfakes. And Fantenberg has an equally fitting—and chilling—warning: AI-driven deepfake-based attacks will accelerate exponentially and cause users to question the validity and integrity of multimedia like video, images, and audio files.
That means every member of every organisation will need to be on high alert, as anyone can potentially fall victim to a deepfake. Anything less and it could be Arup all over again… and again… and again.
So this all raises the question: What can businesses do in the face of this threat?
Fighting Back: Is It Even Possible?
Being vigilant is a start, according to Fantenberg, and it must be practised by the individual members of the business—that is, the employees and management—and by the organisation itself. Individually, being vigilant means keeping an eye out for oddities in any video, checking the authenticity of digital communications, tracking the origin of all unsolicited inquiries, and reporting deepfake attempts to the proper authorities, among others.
Jumio’s Ho, on the other hand, emphasised the need to “stay informed while maintaining scepticism toward sensational content” as deepfakes are frequently used to generate sensational or controversial material.
“If something appears too shocking or out of character, take a moment to verify it before reacting or sharing,” Ho shared.
From an organisational standpoint, Ho points out that there are technological solutions available to mitigate the threat of deepfakes. One of them, according to the Jumio executive, is liveness detection, which enables companies to confirm that a real, physically present user is behind the app. These tools, or at least the good ones, are rigorously tested to ensure they can foil advanced spoofing attempts.
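Liveness detection products vary by vendor, but many rely on an active challenge-response step: the app asks the user to perform a random action within a short window, which a pre-rendered or injected deepfake stream cannot anticipate. Below is a minimal sketch of that control flow only; all names are hypothetical, and the frame analysis (which in a real product is a computer-vision model) is stubbed out.

```python
import random
import time

# Hypothetical challenge-response liveness check. A real system would
# analyse camera frames with a vision model; here the frame analysis is
# stubbed so only the protocol logic is shown.
CHALLENGES = ["turn head left", "blink twice", "smile", "nod"]
RESPONSE_WINDOW_SECONDS = 5.0

def action_detected(frames, challenge):
    # Stub: a production system would run a detector over the captured
    # video to confirm the requested action actually occurred.
    return challenge in frames

def liveness_check(capture_frames):
    """Issue a random challenge and verify it is performed in time.

    `capture_frames` is a callable that records video for the response
    window and returns the captured frames (here, a list of labels).
    """
    challenge = random.choice(CHALLENGES)  # unpredictable per session
    start = time.monotonic()
    frames = capture_frames(challenge, RESPONSE_WINDOW_SECONDS)
    elapsed = time.monotonic() - start
    # Reject late responses: an attacker splicing in synthesised footage
    # needs time to generate the matching action on the fly.
    if elapsed > RESPONSE_WINDOW_SECONDS:
        return False
    return action_detected(frames, challenge)

# Simulated genuine user: performs whatever action was asked.
def genuine_user(challenge, window):
    return [challenge]

# Simulated camera-injection replay: loops canned footage and cannot
# know the random challenge in advance.
def replayed_stream(challenge, window):
    return ["wave"]  # always the same canned action

print(liveness_check(genuine_user))     # True
print(liveness_check(replayed_stream))  # False: canned footage never matches
```

The key design point is the randomness: because the challenge is chosen per session, pre-generated deepfake footage has no way to match it, which is why even this simple pattern raises the attacker’s cost considerably.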
Businesses also need to “take more proactive measures and put in place cutting edge technologies”—such as multimodal biometric identity verification and behavioural analytics, which can deter cybercriminals using deepfakes.
“For hackers to trick these solutions, it would require an unimaginable investment into expensive, bleeding-edge technologies that could somehow exhibit lifelike gestures, natural lighting and shadow changes, and realistic reactions to the environment,” Ho explained. “Even if the investment is made, advanced liveness solutions are always evolving and will still be able to detect subtle differences. These technologies have sent fraud levels plummeting, as most fraudsters often abandon the process as soon as they learn that they are required to take a live selfie.”
But it might not be long before cybercriminals start outsmarting even the best anti-fraud solutions—something that could happen once the attacks become more sophisticated than the defences. This is why experts like Ho and Fantenberg hope anti-deepfake technologies can stay ahead of threat actors by improving existing deepfake detection tools, developing new detection approaches, and building more sophisticated detection algorithms.
Governments Need to Get Involved
As the battle against deepfakes continues, both Ho and Fantenberg anticipate a larger role for governments, whose role is essential to “continue influencing our path toward a more secure digital environment,” according to Ho. In particular, they will need to “enact new laws and regulations to control AI effectively in order to appropriately reduce the hazards connected with its application,” said Fantenberg, who also emphasised that “these regulations must permit the development of new technologies while also offering protection against their abuse.”
An example to this end is the AI Act passed by the European Union last December, which requires those who create AI-made videos to watermark their content. Closer to home, the Cyberspace Administration of China (CAC) rolled out what is arguably the most comprehensive and substantial set of regulations on deepfakes (referred to by the CAC as deep synthesis technology) because the technology “has been used by some unscrupulous people to produce, copy, publish, and disseminate illegal and harmful information, to slander and belittle others’ reputation and honour, and to counterfeit others’ identities.” Singapore, for its part, already has the Protection from Online Falsehoods and Manipulation Act, whose main objective is to counter false statements disseminated over the internet in The Lion City.
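Watermarking and provenance requirements like these generally work by binding a cryptographic tag to the media at creation time, so anyone downstream can check whether a file still matches what the publisher signed. Here is a minimal sketch of that tamper-evidence idea using an HMAC; the key handling and function names are illustrative, and real provenance standards such as C2PA use public-key signatures with manifests embedded in the file rather than a shared secret.

```python
import hashlib
import hmac

# Illustrative only: real provenance schemes (e.g. C2PA manifests) use
# public-key signatures, not a shared secret, and embed the manifest in
# the media file itself. This shows only the tamper-evidence idea.
SECRET_KEY = b"publisher-signing-key"  # hypothetical key material

def sign_media(media_bytes: bytes) -> str:
    """Produce a tag binding the publisher to these exact bytes."""
    return hmac.new(SECRET_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, tag: str) -> bool:
    """Check the media has not been altered since it was signed."""
    expected = sign_media(media_bytes)
    # Constant-time comparison avoids leaking the tag via timing.
    return hmac.compare_digest(expected, tag)

video = b"...original AI-generated video bytes..."
tag = sign_media(video)

print(verify_media(video, tag))                 # True: untouched
print(verify_media(video + b" tampered", tag))  # False: content changed
```

Note what such a scheme can and cannot do: it proves a file is unchanged since signing, but it cannot, on its own, stop a fraudster from simply publishing unsigned or freshly signed fakes, which is why regulation focuses on making signing mandatory for AI-generated content.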
That governments need to get involved is indicative of just how deep the problem of deepfakes has become—big enough that, according to a Jumio survey, 75% of the global population is worried about deepfakes’ potential influence on national elections (leading South Korea to ban deepfakes ahead of its April 2024 elections and Singapore to consider the same ahead of its polls).
Banning deepfakes entirely would be a drastic move, but it could be an avenue worth exploring, as desperate times call for desperate measures. But is it even possible? The amount of technological investment alone, from both an organisational and a governmental perspective, would be mind-boggling—without any guarantee it would put a definitive end to deepfakes.
So, putting it another way, this deepfake problem isn’t going away—at least not anytime soon. As Hogg said, these virtual imitations are only scratching the surface, and that sounds like big trouble.