Navigating Asia Pacific’s Deepfake Dilemma
Deepfakes Are Becoming Increasingly Worrying
Many of us in Asia Pacific have only begun to take deepfakes seriously in the past year.
This might stem from the fact that many of us have witnessed just how pervasive and persuasive the technology can be—especially with Artificial Intelligence (AI) in the mix. Its reach is fuelling major societal challenges, enabling increasingly sophisticated scams that exploit trust and emotional vulnerability. One example is the deepfake romance scam: in a single Hong Kong case, victims lost over US$46 million to “love.”
Worryingly, it has also become increasingly commonplace to see ministers and celebrities ‘endorsing’ financial schemes. In Singapore, Senior Minister Lee Hsien Loong flagged a case in which fraudsters manipulated a video to show him advocating an investment product with ‘guaranteed returns.’ In the Philippines, scammers are using deepfakes of local tycoons for investment fraud, while in Malaysia, fraudsters have exploited the voice and likeness of popular singer Siti Nurhaliza for similar scams.
Deepfakes have permeated many aspects of life: people encounter them on social media, and their reach now extends into financial services and the public sector. The trend is particularly concerning with elections approaching in Singapore and the Philippines next year, and it has already prompted Singapore to ban deepfakes of candidates during the election period.
The repercussions are serious: deepfakes can disrupt power dynamics and gradually undermine trust in our economies and governments.
We see evidence of this in a recent study by Jumio, which revealed that 72% of consumers worry daily about being fooled by deepfakes into sharing sensitive information or money. This figure rises to an alarming 88% in Singapore. Furthermore, 67% of global consumers doubt their banks’ ability to combat deepfake-based fraud, and 75% are ready to switch providers over inadequate fraud protection.
But, while most people are already worried, are organisations taking the threat seriously enough?
Compounding the issue, deepfakes are rapidly evolving. The technology is becoming more adept at deceiving our eyes and ears. How will this impact our current security standards? Crucially, how can organisations not only protect themselves but also restore consumer confidence in an era where seeing is no longer believing?
The Rise of Sophisticated Deepfake Tactics
Before tackling the threat, we first need to understand the lay of the land. AI is now more accessible than ever, and scammers are fully aware of its opportunities. Already, they are utilising the latest AI tools to elevate both the sophistication and scale of fraud—all at unprecedented speeds and low costs. The aforementioned fake videos of Singapore’s senior minister, with their disturbingly realistic voice cloning and lip-syncing techniques, serve as stark examples.
In this context, the statistics about deepfaked politicians are concerning. While 83% of people in Singapore worry that AI and deepfakes could influence upcoming elections, a surprising 60% believe they could easily identify a deepfake of a politician. This misplaced confidence is particularly troubling given that 66% of them would still trust political news they see online, despite the risk of encountering deepfakes. As easily accessible AI tools grow ever more capable of producing realistic deepfakes, the risk of misleading the public about their politicians only increases.
The implications extend beyond governments; newer deepfake technologies could also undermine our financial systems. This includes the mass production of fake identities to create synthetic personas, such as combining real credentials with fabricated images. A growing trend also involves camera injection techniques, which effectively trick a device’s camera into perceiving people who aren’t actually there.
Innovative Solutions in the Face of the Growing Threat of Deepfakes
Fearful of deepfake scams, consumers are demanding change. A majority of global consumers (60%) call for more AI regulation to address the issues around deepfakes and generative AI, while 69% want stronger cybersecurity measures as confidence in banking protection wanes. However, with regulatory trust varying globally, the private sector must step in and do its part.
For financial institutions, a vital step in strengthening security measures is ensuring that the right person is in front of the screen during transactions. One of the most effective tools for this is liveness detection, which verifies that a real, physically present person is behind the camera. Cutting-edge liveness detection techniques undergo rigorous testing so that they can effectively combat the most sophisticated spoofing attempts, including deepfakes.
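To make the idea concrete, here is a minimal sketch of the final accept/reject step of a liveness check. This is an illustrative assumption, not Jumio's actual implementation: real systems fuse many signals (texture, depth, motion), whereas this sketch only combines a hypothetical passive liveness score with an active challenge (e.g. a prompted head turn).

```python
from dataclasses import dataclass

@dataclass
class LivenessResult:
    score: float            # hypothetical 0.0-1.0 confidence a live person is present
    passed_challenge: bool  # did the user complete a prompted action (e.g. head turn)?

def is_live(result: LivenessResult, threshold: float = 0.9) -> bool:
    """Accept only when both the passive score and the active challenge pass.

    Requiring both signals means a replayed or injected video that scores
    well passively is still rejected if it cannot respond to the prompt.
    """
    return result.score >= threshold and result.passed_challenge
```

The design choice worth noting is the conjunction: either signal alone is spoofable, but a deepfake that fools the passive model must also react, in real time, to an unpredictable challenge.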
Assessing additional fraud risk signals to detect anomalies and suspicious transactions throughout the customer journey adds an extra layer of security. These include checking whether a user’s email and phone number have been used to open multiple accounts in a short period of time, as well as verifying the user’s location via IP address. It is also important to monitor more frequently for activity such as accounts being accessed from new locations or devices.
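One of those signals can be sketched directly: flagging an email or phone number reused across several account openings within a short window. The event format, window, and threshold below are illustrative assumptions, not a description of any vendor's product.

```python
from datetime import datetime, timedelta

def risk_signals(events, window=timedelta(hours=24), max_accounts=3):
    """Flag identifiers reused across too many account openings too quickly.

    `events` is a list of dicts with 'email', 'phone', and 'timestamp' keys,
    one per account-opening attempt. Returns a set of (field, value) pairs
    where `max_accounts` or more openings shared that value within `window`.
    """
    flagged = set()
    for key in ("email", "phone"):
        by_value = {}
        for e in events:
            by_value.setdefault(e[key], []).append(e["timestamp"])
        for value, times in by_value.items():
            times.sort()
            # Slide over sorted timestamps: any max_accounts-sized run
            # inside the window triggers the signal for this identifier.
            for i in range(len(times) - max_accounts + 1):
                if times[i + max_accounts - 1] - times[i] <= window:
                    flagged.add((key, value))
                    break
    return flagged
```

In practice a signal like this would feed a risk score alongside IP geolocation and device checks rather than block an account outright, since legitimate users occasionally retry sign-ups.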
Another effective tool is predictive analytics. According to Jumio analysis, 25% of fraud is interconnected, either carried out by fraud rings or individuals who exploit shared information or credentials to open new accounts on banking sites, e-commerce platforms, sharing economy sites and more.
Predictive analytics help to tackle this issue by screening and identifying suspicious individuals or interconnected fraud patterns, thereby strengthening security, enhancing trust, and ultimately fostering a secure environment for both users and regulators. Predictive analytics is exactly what Bank Negara Malaysia is currently exploring to detect fraudulent transactions via its National Fraud Portal.
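The "interconnected fraud" idea above can be illustrated with a small sketch: treat accounts as linked whenever they share an identifier (email, phone, device), then group linked accounts into clusters for review. This is a generic union-find illustration under assumed data shapes, not Jumio's or Bank Negara Malaysia's actual analytics.

```python
from collections import defaultdict

def fraud_clusters(accounts):
    """Group accounts that share any identifier into clusters.

    `accounts` maps an account ID to a set of identifier strings.
    Accounts sharing an identifier land in the same cluster; unusually
    large clusters are candidates for fraud-ring investigation.
    """
    # Union-find over account IDs, linked via shared identifiers.
    parent = {a: a for a in accounts}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    owners = defaultdict(list)
    for acct, idents in accounts.items():
        for ident in idents:
            owners[ident].append(acct)
    for accts in owners.values():
        for other in accts[1:]:
            parent[find(accts[0])] = find(other)

    clusters = defaultdict(set)
    for a in accounts:
        clusters[find(a)].add(a)
    return list(clusters.values())
```

Production systems extend this with edge weights (how suspicious each shared identifier is) and graph analytics, but the core insight is the same: fraud rings reveal themselves through reused credentials that honest users have no reason to share.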
Constant Vigilance: Staying Ahead in the Deepfake Era
Businesses have made commendable strides in enhancing their defences against deepfakes and fraud, including implementing stricter authentication measures, moving away from one-time passwords, and bolstering fraud surveillance. However, more can be done in the face of rapidly advancing deepfake attacks.
For individuals, the key advice is to stay alert and well-informed. This is vital for everyone, especially considering that 60% of global consumers (and 77% in Singapore) still overestimate their ability to detect deepfakes. We are all susceptible to misleading content, and it is important to remain cautious.
For instance, deepfakes are often used to produce provocative or controversial material, so exercising caution toward such content is paramount. If you encounter something that seems excessively shocking or out of character, take the time to research it further through official sources to verify its authenticity before reacting or sharing.
For businesses to counter the rise in deepfakes and cyber deception, more effective technological solutions are needed. Incorporating multimodal, biometric-based verification systems is imperative. These technologies are key to ensuring that businesses can protect their platforms and their customers from emerging online threats, and are significantly stronger than passwords and other traditional, outdated methods of identification and authentication.
As deepfakes continue to evolve and infiltrate various aspects of our lives, both individuals and organisations must adopt a proactive stance. By fostering a culture of vigilance and leveraging advanced technologies, we can collectively enhance our resilience against the risks posed by misinformation. Ultimately, these efforts are integral to preserving trust in our digital interactions and safeguarding the integrity of our societies.