
17 Sep 2025 · Identity fraud

New Survey: Deepfakes Are Reshaping the Threat Landscape in IDV

Henry Patishman

Executive VP, Identity Verification solutions at Regula

Since the beginning of the year, interest in deepfakes has surged. According to Google Trends, searches for “deepfake AI tool” are up 180%, “deepfake detection” by 160%, and “deepfake definition” by 150%. 

While the public is still trying to grasp what deepfakes are, fraudsters have already made them part of their toolkit—along with other methods, both common and unconventional—reshaping how identity fraud works. 

How are businesses responding to these threats? Are existing fraud prevention systems enough to detect and stop AI-driven attacks? Which tools still work—and which are outdated?

To explore these questions, Regula surveyed fraud prevention and financial crime professionals across four global markets (the US, Germany, the UAE, and Singapore), representing companies in the aviation, banking, crypto, fintech, healthcare, and telecommunications sectors.

Here is what we found.

Key findings

  • The line between traditional and impersonation attacks has blurred: both are now common and equally active. 

  • Smaller companies tend to face fewer AI-driven fraud attempts, while larger enterprises are exposed to more frequent and sophisticated incidents.   

  • The finance-related sector is among the most affected by impersonation techniques, including deepfakes, biometric fraud attempts, and identity spoofing.

What are the mainstream fraud tactics today?

In our 2022 identity fraud survey, AI-driven attacks were just emerging, and detecting synthetic identities was a top priority. Fast forward to 2024, and the threat landscape has expanded: in addition to those risks, many companies reported fake and altered physical documents among the main threats they had to handle.

Today, these threats have been overtaken by more advanced and varied tactics, including:

[Chart: the most common types of identity fraud companies are currently facing]

Identity spoofing

A significant percentage of organizations (34%) reported identity spoofing—using photos, videos, or other media to impersonate someone—as a common threat. Although it’s considered a traditional fraud tactic, it’s still effective. It typically involves presenting printouts, video replays, or images displayed on a screen during selfie verification.

Interestingly, this method is increasingly targeting medium-sized companies (31% of respondents) and large enterprises (39.7%), especially in the banking industry (34%). Here, spoofing is often used to open accounts linked to scams or mule networks. 

Geographically, businesses in the UAE and Germany (36%) reported the highest number of such incidents over the past year.

Biometric fraud

Using fake or stolen biometric data to bypass security is another major threat, reported by 34% of businesses. This type of attack includes the use of physical artifacts—such as fake fingerprints, silicone masks, or 3D face models—to deceive biometric sensors. These methods are commonly used for account takeovers and SIM swaps. 

The largest share of affected companies (36%) are mid-sized and operate in healthcare (36%) and the finance-related and crypto sectors (both 35%). Geographically, businesses in Singapore (36%) suffered the most from biometric fraud incidents.

Deepfake fraud

Attacks involving AI-generated faces, voices, or videos to convincingly mimic or invent identities ranked among the top three threats (33%). Deepfakes are often used in presentation attacks to trick the system into believing a real person is in front of the camera. Typically, they are deployed to bypass biometric verification that relies on live video.   

According to the survey, 36% of companies affected by this type of fraud were large—most of them in the fintech (38.6%), aviation (37%), and banking (33%) sectors, where video-based Know Your Customer (KYC) processes are widely used. Among all countries, the UAE reported the highest share of companies encountering deepfakes (35%).

Other common fraud tactics

Identity spoofing, biometric fraud, and deepfakes now occur just as often as traditional methods like document fraud (30%), social engineering scams (30%), and synthetic identities (29%). The latter, considered a brand-new, rising threat back in 2022, has since become a familiar tactic that fraudsters use less frequently.

Notably, most of these fraudulent activities target the onboarding stage of first-time customers. This makes the initial interaction between a user and a company the most vulnerable point—one that requires stronger protection. However, regular customers aren’t immune either. Their credentials and accounts remain prime targets as well.


“Fraudsters are no longer breaking in through the back door—they’re walking through the front. The verification step itself has become the primary target. Criminals create fake but ‘clean’ identities that look legitimate from day one, making downstream fraud detection nearly powerless. Onboarding is now the battleground.”

Ihar Kliashchou, Chief Technology Officer at Regula

[To be continued]
