
How Deepfake Scammers Almost Pulled Off a Half-Million Dollar Heist

They say if it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck. But as deepfake technology becomes more sophisticated and realistic, can we really trust our own eyes and ears anymore?

Earlier this month, a finance director at a Singapore-based MNC learned this the hard way when the “C-level executives” instructing him to transfer nearly half a million US dollars, who looked and sounded every bit the part, were actually using deepfake technology.

The Anatomy of a Deepfake Heist

The elaborate scheme unfolded over several days, with each interaction carefully orchestrated to deceive the finance director. Here’s a breakdown of the key events:

  • Initial contact: The finance director was first contacted via WhatsApp by an individual impersonating the company’s Chief Financial Officer (CFO), setting the stage for a supposed urgent restructuring project.
  • Lawyer introduction: He was then contacted by a fake lawyer who emphasised the project’s importance and the need for strict confidentiality, even leading him to sign a non-disclosure agreement.
  • Rescheduled video conference: A previously scheduled video conference was abruptly moved forward.
  • The deepfake deception: The finance director participated in a Zoom video conference where deepfake technology was used to impersonate the company’s CEO and other stakeholders.
  • The transfer: Under the guise of instructions from the fake lawyer and the deepfake executives, the finance director was directed to transfer US$499,000 to a local corporate bank account, before most of it was transferred to Hong Kong bank accounts.
  • Discovery and recovery: The scam was uncovered when the fraudsters requested an additional US$1.4 million, prompting the finance director to alert the bank. Swift action by authorities in Singapore and Hong Kong led to the successful freezing and recovery of the transferred funds.

What’s frightening (and also a bit fascinating) is that the entire operation involved a masterful blend of social engineering, deception, psychological manipulation and cutting-edge technology — the kind of elaborate con you’d expect to see in a Hollywood heist movie, only this time, the entire crime took place online.

Darren Guccione, CEO and Co-Founder of Keeper Security, shared his views on the issue with CSA, saying, “The recent case involving a Singapore-based finance director being duped by a deepfake impersonation of his CFO underscores the growing sophistication of cyber threats businesses and individuals face today. This scam is particularly concerning as it isn’t just a phishing email or a spoofed phone call – it’s an entire deepfake deception, orchestrated through artificial intelligence.”

He added that deepfakes are no longer confined to celebrity impersonations or internet novelty. In the hands of cybercriminals, Darren believes deepfake technology can be used maliciously to create realistic video and audio simulations of executives that can fool even the most cautious employees.

Think You Can Tell a Deepfake Apart? Think Again

Many people assume that spotting a deepfake video would be easy and that they would never fall victim to such schemes. But what do the numbers really tell us? According to a Jumio study, 60% of people worldwide believed last year that they could spot a deepfake, up from 52% in 2023.

However, believing you can spot a deepfake is one thing; actually doing it is another. According to iProov, only 0.1% of the people it tested could accurately identify AI-generated deepfakes. Participants also found deepfake videos harder to identify than deepfake images. And under pressure, especially when confronted by an authority figure like a CFO or CEO, a person’s judgement can become even more clouded.

Knowledge and awareness are, of course, vital. With many people, particularly among older generations, having never even heard of deepfakes, and one in four company leaders reporting little to no familiarity with the technology, it’s clear that the threat remains invisible to much of the public.

Fighting Back Against Digital Deception

What immediate steps should organisations take to safeguard against deepfake threats? On this, Darren Guccione shared that fostering a culture of cybersecurity awareness across all levels of an organisation is crucial to preventing such attacks.

“Training should extend beyond email scams to include emerging threats like deepfakes and CEO impersonation. Security teams should train employees on the red flags of these scams, teach them to verify requests through secondary channels – especially for sensitive information or financial transactions – and create clear escalation procedures for suspicious communications,” he said.

Darren added that even if a scammer impersonates a senior executive convincingly, leveraging a Privileged Access Management (PAM) solution will enforce strict access controls, limiting the scope and impact of a scam. “It ensures that only verified and authorised users can access systems or initiate sensitive actions. PAM tools also have features like session monitoring and recording, which provide audit trails for security teams to detect suspicious activities and investigate incidents during the recovery phase.”
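PAM products implement these controls in many different ways, but the two ideas Darren describes — allowing only authorised users to initiate sensitive actions, and keeping an audit trail of every attempt — can be illustrated with a minimal sketch. All names, roles and the dual-approval policy below are hypothetical, not a description of any specific PAM product:

```python
import datetime

# Hypothetical policy tables: which roles may initiate which privileged
# actions, and which actions additionally require a second approver.
ROLE_PERMISSIONS = {"finance_director": {"initiate_transfer"}}
DUAL_CONTROL_ACTIONS = {"initiate_transfer"}

# Every attempt, allowed or not, is recorded here for later investigation.
AUDIT_LOG = []

def request_action(user, role, action, approver=None):
    """Gate a privileged action: check the requester's role, require a
    second approver for dual-control actions, and log every attempt."""
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    if action in DUAL_CONTROL_ACTIONS and approver is None:
        allowed = False  # high-risk actions need independent sign-off
    AUDIT_LOG.append({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "role": role,
        "action": action,
        "approver": approver,
        "allowed": allowed,
    })
    return allowed

# A transfer requested without a second approver is refused — even if the
# "executive" on the video call sounded entirely convincing.
print(request_action("alice", "finance_director", "initiate_transfer"))         # False
print(request_action("alice", "finance_director", "initiate_transfer", "bob"))  # True
```

The point of the sketch is that the control is procedural, not perceptual: no matter how persuasive the deepfake, a single deceived employee cannot complete the transfer alone, and the audit log preserves evidence for the recovery phase.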

Pause, Think, Verify

What we can be sure of is that at the rate AI technology is advancing, deepfakes and similar manipulations are only going to improve and become increasingly indistinguishable from reality.

It’s worrying enough that detection tools are already struggling to keep up. If the battle between creators and detectors continues, the best we can hope for is that the gap doesn’t grow beyond reach.

At the end of the day, a critical layer of defence has to rest with the individual. Perhaps our most reliable compass lies in maintaining a cautious and considered approach to digital interactions, at all times. It pays to cultivate a habit of mindful scepticism, especially when requests involve valuable information or large sums of money.

A moment of hesitation, a second opinion sought, a direct verification. These simple steps may be our most potent weapons in navigating an increasingly deceptive digital world.

Syed Ahmad Hafez

In his role as Editor, Syed oversees the editorial content and day-to-day news operations of AOPG’s IT portals, primarily Data Storage Asia, Disruptive Tech News and Cyber Security Asia. Syed started his career in IT, where he was involved in IT projects subcontracted by companies such as Mesiniaga, Standard Chartered and Fujitsu. With an engineering background from his tertiary years and a penchant for language and translation, Syed draws on his varied experience and wealth of copywriting expertise to help readers stay updated with the ever-evolving world of enterprise IT through informed articles and interviews.
