FS-ISAC Provides Guidance as Deepfakes Pose New Threats to Financial Institutions
Report Includes First-of-Its-Kind Taxonomy of Deepfake Risks, Threat Scenarios, Controls, and Mitigations Specific to the Financial Services Sector
FS-ISAC, the member-driven, nonprofit organisation that advances cybersecurity and resilience in the global financial system, has announced that its Artificial Intelligence Risk Working Group has published “Deepfakes in the Financial Sector: Understanding the Threats, Managing the Risks,” a first-of-its-kind Deepfake Taxonomy designed to help senior executives, board members, and cyber leaders address emerging risks posed by deepfakes.
Deepfakes, synthetic media generated using advanced Artificial Intelligence (AI), have become increasingly sophisticated, enabling threat actors to impersonate executives, employees, and customers to bypass conventional security measures. By exploiting the human trust that underpins financial transactions and decision-making, deepfakes allow cybercriminals to defraud financial institutions and their customers, steal money and data, and sow confusion and disinformation.
The report outlines the risks financial institutions face, including information security breaches, market manipulation, direct fraud against customers and clients, and reputational harm from disinformation campaigns. A recent industry report projects that losses from deepfake and other AI-generated fraud will reach USD 40 billion in the US alone by 2027, making it imperative for institutions to take decisive action.
“The potential damage of deepfakes goes well beyond the financial costs to undermining trust in the financial system itself,” said Michael Silverman, Chief Strategy & Innovation Officer at FS-ISAC. “To address this, organisations need to adopt a comprehensive security strategy that promotes a culture of vigilance and critical thinking to stay ahead of these evolving threats.”
FS-ISAC Report Aims to Raise Awareness of Deepfakes
While the threats deepfakes pose to financial institutions are significant and evolving, the taxonomy outlined in “Deepfakes in the Financial Sector: Understanding the Threats, Managing the Risks” helps financial firms identify the threat categories that pose the greatest risk to them and the controls that mitigate those risks.
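To make the idea of mapping threat categories to controls concrete, here is a minimal sketch in Python. The category names, descriptions, and controls below are illustrative assumptions for how a firm might record its own assessment; they are not the taxonomy defined in the FS-ISAC paper.

```python
# Illustrative sketch only: a minimal structure mapping deepfake threat
# categories to candidate controls. All names below are hypothetical
# examples, not the categories from the FS-ISAC paper.
from dataclasses import dataclass, field

@dataclass
class ThreatCategory:
    name: str
    description: str
    controls: list[str] = field(default_factory=list)

# Hypothetical entries a firm might record while assessing its exposure.
TAXONOMY = [
    ThreatCategory(
        name="executive-impersonation",
        description="Synthetic audio/video of executives used to authorise transfers",
        controls=["out-of-band callback verification", "dual approval for payments"],
    ),
    ThreatCategory(
        name="customer-identity-fraud",
        description="Deepfaked customers defeating voice or video identity checks",
        controls=["liveness detection", "multi-factor authentication"],
    ),
]

def controls_for(category_name: str) -> list[str]:
    """Return the candidate controls recorded for a given threat category."""
    for category in TAXONOMY:
        if category.name == category_name:
            return category.controls
    return []

if __name__ == "__main__":
    print(controls_for("executive-impersonation"))
```

A firm could extend such a structure with likelihood and impact scores to prioritise which mitigations to implement first.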
The path forward lies in strengthening existing controls and processes and educating employees and customers. In the coming weeks, detailed guidance for the technologists who implement security measures will be published in two subsequent documents.
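As one example of the kind of existing control such guidance typically covers, the sketch below gates high-risk requests behind out-of-band confirmation. The threshold, channel names, and function names are hypothetical; this is an illustration of a common control pattern, not the forthcoming FS-ISAC guidance.

```python
# Minimal sketch of an out-of-band verification gate for high-risk requests,
# a common existing control against voice/video impersonation. The threshold
# and function names are hypothetical, chosen here for illustration.

HIGH_RISK_THRESHOLD = 10_000  # assumed per-transaction limit in account currency

def requires_out_of_band_check(amount: float, channel: str) -> bool:
    """Flag requests above the assumed threshold that arrived over
    impersonation-prone channels (voice or video calls)."""
    return channel in {"voice", "video"} and amount >= HIGH_RISK_THRESHOLD

def process_transfer_request(amount: float, channel: str, verified_callback: bool) -> str:
    """Approve only when flagged requests were re-confirmed over an
    independent, pre-registered channel (e.g. a known phone number)."""
    if requires_out_of_band_check(amount, channel) and not verified_callback:
        return "hold: confirm via independent pre-registered channel"
    return "proceed"

if __name__ == "__main__":
    # A large transfer requested on a video call is held until confirmed.
    print(process_transfer_request(250_000, "video", verified_callback=False))
```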
Hiranmayi Palanki, Distinguished Engineer at American Express and Vice Chair of FS-ISAC’s AI Risk Working Group, said, “Addressing deepfake technology requires more than just technical solutions—it also demands a cultural shift. Building a workforce that is alert and aware is crucial to safeguarding both security and trust from the potential threats posed by deepfakes.”
“Deepfake technologies are advancing very quickly, but our known controls can mitigate a lot of the risk,” said Lisa Matthews, Senior Director, Cybersecurity Compliance, at Ally Financial and a member of the AI Risk Working Group. “The taxonomy we’ve developed allows firms to build a holistic and methodical approach that is flexible enough to adapt as the technologies advance.”