Financial institutions are under siege as artificial intelligence fuels a new wave of fraud. A recent report by identity verification firm Veriff reveals a troubling surge in online ID scams—one in every 20 verification attempts in financial services is now fraudulent.

With deepfakes and AI-powered deception on the rise, the cost is hitting both consumers and companies hard. According to Veriff’s latest “Future of Finance” report, identity fraud in the financial sector rose by 21% over the past year.

A 21% Yearly Spike in Identity Fraud

The study, which combines in-house data and survey responses from professionals and consumers, paints a picture of an ecosystem increasingly manipulated by deepfakes and AI-generated content.

Fraud has become personal. Over a third of U.S. consumers surveyed said they suffered financial losses that couldn’t be recovered. The damage is not limited to individuals.

One-third of fraud professionals report that their firms lost between 3% and 5% of annual revenue to fraud, a so-called "fraud tax" that is unsustainable.

“Financial services organizations are natural targets for sophisticated fraudsters. But by embracing technological evolution in AI, biometrics, and identity verification (IDV), financial services leaders can protect their companies and their customers—and seize the opportunities the ever-evolving digital world will bring,” said Ira Bondar, Senior Fraud Group Manager at Veriff.

Among the most concerning trends is the rise of “authorized fraud,” where scammers trick victims into participating in identity verification, often by impersonating banks or officials.

Authorized Fraud Is the Fastest-Rising Threat

This form of fraud has grown alongside AI capabilities, making it easier for attackers to insert themselves digitally between the user and a financial institution.


These adversary-in-the-middle attacks, whether physical or digital, enable fraudsters to seize control of legitimate accounts. Veriff found that authorized fraud now accounts for 1.5% of all fraud attacks in financial services, much higher than in sectors like gambling or HR, where it’s virtually non-existent.

AI Arms Both Sides of the Battle

The same technology that enables fraud is also being used to fight it. Nearly two-thirds (64%) of U.S. fraud professionals already deploy AI and machine learning tools in prevention efforts, with another 20% planning to adopt such tools within a year. Adoption rates in the UK and Brazil are similar.

Trust plays a central role. Over 82% of consumers say they won’t register with financial platforms they believe have weak fraud prevention systems. Biometrics, especially facial recognition, emerged as the preferred method of secure interaction for 38% of respondents.

Most companies have already taken action: 83% of financial firms now use some form of identity verification or biometric solution, and 81% plan to increase investment in such tools. Early adopters report a noticeable drop in fraud rates.

More Fraud, Smarter Defenses

A large majority of fraud professionals, 89%, expect attacks to intensify in 2025. Malware, impersonation, authorized fraud, account takeovers, and document forgery rank among the top concerns. AI, while a driver of these threats, also remains their best hope for defense.

The report underscores that financial services are natural targets for organized fraud networks. With growing access to powerful AI tools, fraud-as-a-service has become a booming underground economy.

Still, Veriff insists the path forward is not entirely bleak. By doubling down on AI-driven ID verification and consumer education, firms can better safeguard their systems and restore consumer trust in the digital age.

This article was written by Jared Kirui at www.financemagnates.com.