
Three Ways Employees Can Spot a Deepfake Scam

by Sadiq Iqbal, Evangelist & Cyber Security Advisor, Check Point Software Technologies, Australia/New Zealand

Deepfakes have surged into consumer and commercial consciousness alike, owing to their growing sophistication.

Mimicking a human convincingly is now more achievable than ever before. Access to the AI tools used to create deepfakes has improved, and this – along with low barriers to entry – means that convincing fakes can be deployed at scale.

Our research has found that with 30 seconds or less of you speaking on any topic, I – or any other layperson with the relevant AI tools – could create a deepfake of your voice saying anything I wanted, for as long as I wanted, at little to no cost. That is a game changer, and the same is now true of video.

The number of deepfake attacks – successful ones at that – is phenomenal. According to a Sumsub research report conducted last year, global deepfake incidents surged tenfold from 2022 to 2023, with North America recording the highest growth in deepfake attacks at 1,740%.

For enterprises, a key concern is whether an executive can be convincingly deepfaked – or a ‘privileged’ employee fooled by that fake. Earlier this year, a diligent finance worker in Hong Kong was tricked into approving a US$25 million payment through an elaborate video conference featuring a deepfake of the company’s CFO. He had requested a Zoom meeting in response to a money transfer request and only sent the money after speaking with the person he believed to be the chief financial officer, along with several other colleagues on the same call, all of whom were deepfakes. The case brought into stark reality the evolution of the technology and its potential as a new threat vector for business scams.

The significant business impacts associated with the malicious deployment of deepfakes have people and businesses asking what they can do to protect themselves and their operations. How can they work out whether the person on the other end of a videoconference is real and not an AI creation?

At a high level, it requires people to be vigilant and to perform some common-sense checks. People naturally weigh up what they’re seeing and make risk assessments and judgements. Just as they check the veracity of an email and its contents today – cross-checking the sender ID, hovering over a URL or attached file, examining the style and grammar – they can benefit from applying the same type of approach to videoconferencing engagements.

This triangulation of clues and risk factors is a kind of “multi-factor authentication” that we now need to perform more consciously in workplace settings.

Tips and Tricks
So, what are some of the checks that employees can perform today to detect a deepfake, or alert themselves to a more sophisticated scam attempt that makes use of deepfake technology?

The first is to perform a quick ‘liveness’ check – ask the person on the other end of the video to turn their head from side to side. This is effective today because the generative AI tools used to produce deepfakes don’t create a three-dimensional or 360-degree likeness of a person; they can only produce flat, front-facing images. Side-on views of the head and face will not render or display properly. So, if employees are suspicious, they should ask the person to turn left or right – and if the face disappears, hang up.

Similarly, other natural human behaviours often observed on a videoconference – someone reaching up and scratching or touching their head, for example – will also not display properly with a deepfake.

This is effective today but may not be tomorrow. The rapid pace of generative AI development means that the tooling will get better at creating more realistic likenesses of people and their mannerisms, which could make a liveness check more challenging over time.

Employees may be able to pick up on additional clues that a videoconference with an executive isn’t what it seems.

A useful check when a videoconferencing link is received – or when joining a multi-party call – is whether participants are attending via a corporate-licensed version of that software. That is often obvious because companies use a ‘vanity URL’ – companyname.videoplatform.com – to show where they are calling from. Trust can be further established by communicating ahead of time, via a different method such as chat or email, how you and others will be joining the video conference. If you know ahead of time that you’ll receive a videoconference invitation from a certain company domain, that’s another ‘factor’ to take into account when deciding whether to accept the meeting. If the meeting is initiated from a personal account, or someone joins unexpectedly and without explanation from a personal account, it may be a red flag – and warrant, at the very least, a question or two or some further checks.
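For teams that want to partially automate this check, the underlying logic is simple to express. Below is a minimal sketch in Python, assuming a hypothetical allowlist of approved corporate meeting domains; the domain names and the is_trusted_meeting_link helper are illustrative, not a feature of any particular videoconferencing product.

from urllib.parse import urlparse

# Hypothetical allowlist of corporate videoconference domains.
# The entry below reuses the article's illustrative 'vanity URL'.
APPROVED_MEETING_DOMAINS = {
    "companyname.videoplatform.com",
}

def is_trusted_meeting_link(url: str) -> bool:
    # Accept an invite only if its hostname exactly matches an
    # approved corporate domain; personal-account links will fail.
    host = (urlparse(url).hostname or "").lower()
    return host in APPROVED_MEETING_DOMAINS

print(is_trusted_meeting_link("https://companyname.videoplatform.com/j/12345"))  # True
print(is_trusted_meeting_link("https://personal.videoplatform.com/j/67890"))     # False

A check like this is only one ‘factor’: it flags invites from personal accounts, but it cannot, on its own, prove that the person on the call is genuine.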

A third useful strategy is to have agreed internal codewords that must be checked out-of-band before certain actions – such as a money transfer – are performed. In the case of the Hong Kong worker, this would have meant messaging the CFO he thought he was talking to via a completely different channel and asking, ‘What’s the word?’ The response he received would quickly have told him whether or not the ‘CFO’ – and the request being made – was genuine.
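If an organisation wanted to support this practice with tooling, the verification step itself is straightforward. Here is a minimal Python sketch, assuming a hypothetical pre-agreed codeword shared in advance over a separate, trusted channel; the verify_codeword helper is illustrative only, and the real control remains the out-of-band human check, not the code.

import hmac

# Hypothetical codeword, agreed in advance and distributed via a
# separate trusted channel (in person, or over verified chat).
AGREED_CODEWORD = "example-codeword"

def verify_codeword(response: str) -> bool:
    # Normalise the reply received over the second channel, then use a
    # constant-time comparison so the check doesn't leak information.
    return hmac.compare_digest(response.strip().lower(), AGREED_CODEWORD)

# Only proceed with the transfer if the out-of-band reply matches.
if not verify_codeword("Example-Codeword"):
    raise SystemExit("Codeword mismatch: halt the transfer and escalate.")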

Employees should remain cautious, employ all the tips at their disposal, and stay current with the evolution of AI technology to deal with the threat of encountering a deepfake. Their efforts can be ably supported by organisations implementing cybersecurity solutions – including robust email protections – that can detect and prevent many malicious meeting invitations from reaching inboxes in the first place. Given how real the threat is, it’s important to have well-rounded protections in place.
