The Threat of AI-Driven Identity Fraud in the Financial Sector on the Rise, Study Reveals

Over 80% of senior fraud, risk, and compliance bank executives identify AI-driven fraud as a significant emerging threat. As synthetic and stolen identity fraud continues to plague financial institutions, a new study by Deduce and Datos Insights has revealed that the growing sophistication of AI-generated attacks is making it increasingly difficult for existing detection systems to distinguish between fraudulent and legitimate identities.

“At Deduce, we recognize that the evolving nature of stolen and synthetic identity fraud poses a real threat to the financial sector. Our latest research with Datos reveals that AI amplifies these risks, creating complex and harder-to-detect stolen and synthetic identities. This new fraud threat is causing widespread increases in false positives, as existing fraud solutions cannot differentiate AI-driven fraud from legitimate customers, who in turn face frustrating incremental friction. By adopting innovative technologies and fostering industry-wide collaboration, we can stay ahead of fraudsters and safeguard the integrity of our financial systems,” noted Ari Jacoby, CEO of Deduce.

Emerging AI threat

As many as 81% of participants acknowledged AI-driven fraud as a new and significant threat, with many planning to invest in additional protective measures to combat synthetic identities.

Stolen and synthetic identity fraud remains a persistent issue for financial institutions in North America, with many struggling to accurately detect and measure synthetic identity attacks despite having detection systems in place.

AI-generated deepfakes

Reports of AI-generated deepfakes successfully bypassing document verification processes highlight the weaknesses in current identity validation tools.

Inconsistent monitoring and detection

A lack of consistent tracking and reporting capabilities across the financial industry is hindering efforts to effectively understand and tackle synthetic identity fraud.

Current methods for identifying stolen and synthetic identities are often insufficient, making it challenging for financial institutions to differentiate between fraudulent and legitimate identities.

Generative AI presents a serious risk to identity verification processes, potentially enabling the creation of more sophisticated and harder-to-detect synthetic identities.