
Deepfake Phishing: The Dangerous New Twist of an Age-Old Cybercrime

For years, phishing attacks have plagued the digital landscape. These deceptive attempts, where cybercriminals impersonate trusted sources to steal personal information, rely on social engineering to trick victims. Phishing’s effectiveness and adaptability have made it a persistent cybersecurity threat.

With the rise of deepfake technology, phishing is poised to become even more sophisticated and dangerous. Deepfakes use artificial intelligence to create realistic audio and video forgeries, making it even harder to distinguish a legitimate message from a malicious one.

Deepfakes and Phishing: Are You Prepared for the Coming Wave?

Deepfakes burst onto the scene in November 2017, when an anonymous Reddit user released an algorithm that leveraged existing AI techniques to create convincing deepfake videos. The code spread quickly after being published as open source on GitHub, a popular code-sharing platform.

Deepfakes are a double-edged sword. While they hold exciting potential for education and entertainment, their ability to create realistic forgeries raises serious concerns. Malicious actors can exploit this technology to spread misinformation, damage reputations, or launch sophisticated scams.

The real-world consequences of deepfakes are becoming alarmingly clear. In a recent Hong Kong case, an employee was tricked into transferring a staggering HK$200 million (US$25.8 million) after a scammer impersonated a senior company officer in a deepfake video call. This incident highlights the potential for deepfakes to cause significant financial losses.

Deepfakes can also be used to scam consumers. In one instance, scammers used artificial intelligence to create a synthetic version of Taylor Swift’s voice. This voice, combined with existing footage of Swift, falsely offered free cookware sets. This tactic highlights a broader trend: Celebrities like Swift, media mogul Oprah Winfrey, entrepreneur Martha Stewart, actor Tom Hanks, and journalist Gayle King have all been targeted by deepfakes used to promote bogus products or scams.

In February, cybersecurity firm Tenable confirmed that scammers are indeed leveraging generative AI and deepfake technologies to create more convincing personas in romance scams and celebrity impersonations, particularly targeting older demographics. The worrying part is that online tools and tutorials are making it quite easy for scammers to map celebrity likenesses onto their webcams, blurring the lines between reality and deception. These scams often originate on platforms like Facebook, tricking victims into a false sense of security.

Deepfake cases now dominate news headlines almost daily, with no signs of slowing down. The current situation may be just a precursor to an even more dire scenario.

Guarding Against Deepfake Attacks

According to Recorded Future, open-source capabilities currently allow for pre-recorded deepfake generation using publicly available video footage or audio clips, such as interviews and presentations. Threat actors can use short clips (under one minute) to train these models. However, acquiring and pre-processing audio clips for optimal quality continues to require human intervention. In addition, more advanced use cases, such as live cloning, almost certainly require threat actors to bypass consent mechanisms on commercial solutions, as latency issues on open-source models likely limit their effectiveness in streaming audio and video.

A spokesperson from Recorded Future’s Insikt Group commented that executives’ voices and likenesses have now become part of an organisation’s attack surface, so organisations need to assess the risk of impersonation in targeted attacks. Large payments and sensitive operations should use several alternative methods of communication and verification, such as encrypted messaging or email, rather than relying solely on conference calls and VoIP.

Expanding on this point, the spokesperson elaborated, “Organisations, particularly in the media and public sector, should track instances of their branding or content being used to conduct influence operations. [They] should invest in multi-layered and behavioural malware detection capabilities in the event that threat actors are able to develop AI-assisted polymorphic malware. Sigma, Snort, and complex YARA rules will almost certainly remain reliable indicators for malware activity for the foreseeable future.”
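To give a concrete sense of what rule-based detection such as YARA looks like in practice, the sketch below uses the open-source yara-python package to compile and run a single rule against a file. It is an illustrative example only: the rule name, strings, and file path are hypothetical placeholders and are not drawn from Recorded Future’s guidance.

```python
# Minimal sketch, assuming the yara-python package is installed (pip install yara-python).
# The rule, its strings, and the sample path are hypothetical and for illustration only.
import yara

# A toy rule that flags files containing two suspicious markers together.
RULE_SOURCE = r"""
rule suspicious_dropper_example
{
    strings:
        $a = "powershell -enc" nocase   // encoded PowerShell invocation
        $b = { 4D 5A }                   // "MZ" executable header bytes
    condition:
        $a and $b
}
"""

def scan_file(path: str) -> bool:
    """Return True if the example rule matches the given file."""
    rules = yara.compile(source=RULE_SOURCE)
    matches = rules.match(path)
    return len(matches) > 0

if __name__ == "__main__":
    # Hypothetical sample path; replace with a real file to test.
    print(scan_file("sample.bin"))
```

In practice, rules like this would sit alongside the multi-layered and behavioural detection capabilities the article recommends, rather than serving as a standalone defence.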

As an added precaution, organisations should carefully scrutinise and sanitise publicly accessible images and videos that show sensitive equipment and facilities. This is especially crucial for critical infrastructure and sectors deemed sensitive, including defence, government, energy, manufacturing, and transportation.

Genie Sugene Gan, Head of Government Affairs & Public Policy for the Asia-Pacific, Japan, Middle East, Türkiye and Africa regions at Kaspersky, believes the increasing frequency of deepfake cases, exemplified by the Hong Kong incident, is a wake-up call.

She highlighted that aside from maintaining good cybersecurity practices through tools such as Kaspersky Threat Intelligence, organisations and the public alike should fortify their ‘human firewall’ as well.

According to her, while cybersecurity measures are essential, they alone are insufficient amidst constantly evolving cyber threats. It has become imperative for people to educate themselves on cybersecurity threats and risks. This will ensure that people understand the nature of deepfake technology and how it works, enabling them to identify deepfake phishing attempts.

Chan-Wah Ng, AI/ML Research Lead at Acronis, echoed Gan’s sentiment on the escalating deepfake threat:

“The Hong Kong incident serves as a prime example of a situation where the victim lacked awareness regarding the potential for real-time video manipulation, leading to a failure to verify the authenticity of the content through alternative channels such as email or messaging.

Therefore, I advocate for prioritising education efforts aimed at employees or the public, shedding light on the capabilities of highly convincing deepfake technology.

This awareness would equip individuals with the knowledge that the person they are communicating with might not be genuine. With this realisation, individuals would become more vigilant during conversations, promptly recognising any anomalies and seeking verification through questioning.”

Unfortunately, even the most robust cybersecurity systems can be vulnerable if human vigilance is lacking. Regular cybersecurity awareness training that focuses on identifying red flags in deepfake scams, such as unnatural voice patterns or inconsistencies in video editing, can empower people to exercise greater vigilance when they receive suspicious emails or calls. Such training also instils the habit of verifying with the relevant parties before transferring money or sharing credentials. By combining strong cybersecurity measures with a well-trained and informed workforce, we can significantly reduce the risk of falling victim to deepfake phishing scams.

Mohammad Al Amin Mohd Jahaya

Mohammad Al Amin bin Mohd Jahaya serves as a tech journalist at Asia Online Publishing, where he delves into a myriad of technology topics daily, ranging from data analytics to cybersecurity, AI advancements, and emerging technologies such as augmented reality and blockchain. His passion for exploring the intersection of technology and society drives his commitment to delivering insightful and engaging content to readers across various digital platforms. With four years of experience as a writer in digital and content marketing, Mohammad Al Amin draws upon his expertise to enrich his skills as a tech journalist. This unique blend of experiences allows him to provide insightful and comprehensive coverage of the ever-evolving technology landscape.
