Confronting the Threat of Deepfakes in Financial Services
Manipulating Identities, Misleading Institutions, and Exploiting Vulnerabilities Across Digital Platforms

Three in four Singaporeans cannot identify deepfakes. This shocking statistic from a survey by the Cyber Security Agency (CSA) should cause grave concern, especially as deepfake-enabled scams continue to cause serious financial harm.
Once considered digital curiosities, deepfakes are now weaponised to manipulate identities, mislead institutions, and exploit vulnerabilities across digital platforms. In fact, the Deloitte Center for Financial Services estimates that synthetic identity fraud will cause at least US$23 billion in losses by 2030.
According to a trend report by Forrester, APAC users are heavily impacted by deepfakes largely due to the region’s rapid digital adoption, the widespread integration of digital services into daily life, and the challenges posed by its cultural and linguistic diversity. These factors make detection and defence far more complex.
Widening Areas of Attack
In Singapore’s financial sector, rapid digitisation—augmented by the Smart Nation initiative—has resulted in AI-powered tools swiftly becoming embedded in customer touchpoints. According to the 2024 Ping Identity Global Consumer Survey, 87% of consumers are now concerned about identity fraud, a statistic driven in part by rising fears over the misuse of AI.
While the adoption of these powerful tools has unlocked a new level of convenience for customers, it has also broadened the attack surface, creating more opportunities for cybercriminals to exploit vulnerabilities. In response, as fraudsters increasingly turn to advanced technologies like artificial intelligence (AI) and deepfakes, some financial institutions may be tempted to counter these threats with AI alone.
While AI tools can be used to enhance threat detection and response, they are not a silver bullet. AI tools must work together with established cybersecurity protocols and context-aware strategies that are tailored to real-world scenarios. The imperative, then, is to balance AI's advantages with human supervision and strategic relevance.
Rethinking Identity Management in the Fight Against Deepfakes
In this environment, identity verification has to be more than just a regulatory checkbox. Addressing sophisticated fraud demands a fundamental reevaluation of identity management. Outdated systems dependent on fixed credentials or basic biometric methods will not be enough to withstand these technologically advanced threats.
To counter the rising threat of deepfake-driven fraud, APAC’s financial institutions are embracing multi-layered identity verification strategies that go beyond traditional defence methods. One of the first lines of defence is liveness detection, a technology designed to verify that a real, live person is present during authentication. By analysing subtle physiological cues and micro-movements, this method makes it significantly harder for fraudsters to spoof identities using static photos, videos, or synthetic avatars.
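To make the idea concrete, the sketch below (in Python, using a hypothetical liveness_score helper and a purely illustrative threshold) shows one simplified way a liveness check could flag a static replay: genuine captures show small frame-to-frame micro-movements in facial landmarks, while a replayed photo shows almost none.

```python
import numpy as np

def liveness_score(landmark_frames: np.ndarray) -> float:
    """Hypothetical micro-movement check. landmark_frames has shape
    (num_frames, num_landmarks, 2): facial landmark coordinates per frame."""
    # Frame-to-frame displacement of each landmark
    motion = np.diff(landmark_frames, axis=0)
    # A static photo or frozen replay produces near-zero variation across frames
    return float(np.mean(np.linalg.norm(motion, axis=-1)))

# Stand-in for the output of a real landmark-tracking pipeline
frames = np.random.rand(30, 68, 2)
is_live = liveness_score(frames) > 0.01  # threshold is illustrative only
print("live capture" if is_live else "possible replay")
```

Production systems combine many more cues (texture, depth, challenge-response prompts), but the principle is the same: signals that are trivial for a live person are difficult for a static or synthetic artefact to reproduce.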
Another powerful layer involves verified credentials: cryptographically secure digital IDs stored in encrypted wallets. These secure credentials help verify identity authenticity while reducing the likelihood of data breaches. To further enhance digital security, the 'decentralised identity' framework enables individuals to manage their own identity data instead of depending on centralised databases that are vulnerable to threats. This shift helps reduce the risk of large-scale data leaks and identity theft.
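As a simplified illustration of the underlying principle, the sketch below uses Ed25519 signatures from Python's cryptography package: an issuer signs only the attributes being attested, and any verifier can confirm authenticity with the issuer's public key, without querying a central database. The credential structure and identifiers are hypothetical.

```python
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Issuer signs a credential containing only the attributes being attested
issuer_key = Ed25519PrivateKey.generate()
credential = {"subject": "did:example:123", "claims": {"employment": "verified"}}
payload = json.dumps(credential, sort_keys=True).encode()
signature = issuer_key.sign(payload)

# A verifier holding the issuer's public key checks integrity and origin
issuer_public_key = issuer_key.public_key()
try:
    issuer_public_key.verify(signature, payload)
    print("credential authentic")
except InvalidSignature:
    print("credential rejected: tampered or not issued by this authority")
```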
But organisations should not depend on technology alone in the battle against deepfakes. The most effective defence is a hybrid approach that combines AI with highly targeted tools and strategies. Multi-factor authentication (MFA) remains useful, but it cannot stand alone in an era where generative AI can mimic faces and voices to create highly convincing deepfakes. This is where adaptive authentication steps in, continuously analysing behavioural signals (such as typing speed, device usage, and geolocation) to identify anomalies that could indicate fraudulent activity.
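A minimal sketch of that idea, assuming hypothetical signal names and purely illustrative weights and thresholds, might look like the following: each behavioural signal contributes to a risk score, and an elevated score triggers step-up verification rather than silently granting access.

```python
from dataclasses import dataclass

@dataclass
class SessionSignals:
    typing_speed_deviation: float  # deviation from the user's usual cadence (0-1)
    new_device: bool               # device fingerprint not previously seen for this user
    geo_velocity_violation: bool   # login location implies impossible travel

def risk_score(signals: SessionSignals) -> float:
    """Combine behavioural signals into a single risk score (weights are illustrative)."""
    score = 0.4 * signals.typing_speed_deviation
    score += 0.3 if signals.new_device else 0.0
    score += 0.5 if signals.geo_velocity_violation else 0.0
    return score

def next_action(signals: SessionSignals) -> str:
    """Map the score to an authentication decision (thresholds are illustrative)."""
    score = risk_score(signals)
    if score >= 0.7:
        return "block_and_review"
    if score >= 0.3:
        return "step_up_authentication"  # e.g. prompt for an additional factor
    return "allow"

# Example: a known device, but unusual typing plus impossible travel
print(next_action(SessionSignals(0.8, False, True)))  # -> block_and_review
```

Real deployments learn these weights from historical behaviour per user, but the decision flow is the same: context, not a single static credential, determines whether a session is trusted.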
Identity and Access Management as Strategic Infrastructure
To effectively combat the rise of deepfake threats, financial institutions must begin treating identity and access management (IAM) not just as a compliance checkbox, but as a core element of their digital infrastructure. A strategic IAM approach is essential to safeguarding operational integrity and customer trust.
Key priorities for financial service providers include modernising identity verification systems, enhancing authorisation controls, and integrating verified credentials seamlessly throughout the customer journey. But what does that entail?
A robust defence against deepfakes starts with sophisticated identity verification systems that can effectively differentiate between genuine users and artificial impersonators. Another vital weapon in this fight is policy-based access control (PBAC). PBAC offers dynamic and fine-grained authorisation, ensuring access is granted based on contextual factors such as user role, device, location, and transaction type. With verified credentials, users can securely share specific attributes (such as proof of identity or employment) without exposing unnecessary personal data. This approach significantly reduces opportunities for fraudsters to exploit identity blind spots and weak credentials through deepfake attacks.
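The sketch below illustrates the contrast with static role checks, using hypothetical attribute names and policy data: a single policy function weighs role, device trust, location, and transaction type together, so the same user can be permitted, challenged, or denied depending on context.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    role: str              # e.g. "customer", "teller"
    device_trusted: bool   # managed or previously enrolled device
    country: str           # resolved from the request's network location
    transaction_type: str  # e.g. "view_balance", "wire_transfer"
    amount: float

ALLOWED_COUNTRIES = {"SG", "MY"}  # illustrative policy data

def decide(req: AccessRequest) -> str:
    """Policy-based decision: every rule can consult the full request context."""
    if req.country not in ALLOWED_COUNTRIES:
        return "deny"
    if req.transaction_type == "wire_transfer":
        # High-risk operations require a trusted device and a sensible amount,
        # otherwise the user is asked to re-verify before proceeding
        if not req.device_trusted or req.amount > 10_000:
            return "challenge"
    return "permit"

print(decide(AccessRequest("customer", True, "SG", "view_balance", 0)))        # permit
print(decide(AccessRequest("customer", False, "SG", "wire_transfer", 5_000)))  # challenge
```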
The Battle Against Deepfakes Never Stops
The fight against deepfakes is a dynamic battle that requires constant vigilance and innovation. Technology alone is not enough. Human awareness is critical. Training employees to recognise the signs of deepfake attacks, such as subtle changes in communication style or deviations from standard protocols, makes them more vigilant and less susceptible to deception.
By leveraging advanced detection tools, fostering collaboration across teams, and adopting proactive security measures, organisations can mitigate the risks posed by deepfake media and maintain trust in a rapidly evolving digital environment.