
14 Apr 2026 in Identity fraud

The ABN AMRO Bank Case: Why a Face Match Doesn’t Equal an Identity Match

Henry Patishman

Executive VP, Identity Verification solutions

What happened

Dutch prosecutors allege that a man opened nearly 50 ABN AMRO bank accounts by overlaying his own face onto stolen passport identities and using the resulting images in the bank’s remote onboarding flow. The submitted selfies were altered with deepfake techniques to make him resemble each passport holder.

How the scheme appears to have worked

Too many companies still discuss deepfakes as if the threat were a fully synthetic face appearing out of nowhere. In real onboarding fraud, the setup is often much simpler: real documents, real selfies, real humans — but manipulated just enough to pass weak controls.

The alleged attack combined two real things: stolen identity documents and a live person in front of the camera. The selfie was then digitally altered so the attacker’s face looked close enough to the passport photo to pass as the document holder.

This type of manipulation is often described as face morphing, where one real face is blended or adjusted to resemble another rather than replaced with a fully synthetic one. According to prosecutors, the resulting accounts may have been used as mule accounts for fraud or money laundering.

What makes cases like this difficult to catch is that multiple identity signals — the document, the selfie, even the face match — all convincingly appear to support the same identity claim.

Why a false identity can still pass

ABN AMRO hasn’t disclosed the exact verification setup behind this case, so the explanation here is only a likely reconstruction.

Remote onboarding usually works as a chain of checks: document verification, selfie capture, face comparison, basic liveness or spoofing checks, and a final decision. Each step produces signals — from document data to face match results — that feed the final decision on whether the evidence is strong enough to approve the applicant.

That can still end with the wrong result.

  • The document may look real

  • The selfie may show a live person 

  • The face may look close enough to the portrait

None of that, on its own, proves the applicant is the rightful holder of that identity. When those signals are validated one by one, but not tested together as a single identity claim, a false identity can still pass. This is what identity verification is increasingly becoming — a question of whether the underlying signals actually belong together.
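
To make that difference concrete, here is a minimal Python sketch of the two decision patterns. Every name, score, and threshold below is an illustrative assumption, not any vendor’s actual API:

    from dataclasses import dataclass

    @dataclass
    class OnboardingSignals:
        document_genuine: bool   # document forensics passed
        liveness_passed: bool    # a live person was in front of the camera
        face_match_score: float  # selfie vs. document portrait, 0..1
        capture_trusted: bool    # selfie came through a trusted capture flow

    def per_signal_decision(s: OnboardingSignals) -> bool:
        # The weak pattern: each signal is validated in isolation.
        return (s.document_genuine and s.liveness_passed
                and s.face_match_score >= 0.80)

    def joint_decision(s: OnboardingSignals) -> bool:
        # The stronger pattern: the signals must also hold together
        # as a single identity claim.
        if not per_signal_decision(s):
            return False
        # A borderline face match on an untrusted capture is treated as
        # risk, not as two independent passes.
        if not s.capture_trusted and s.face_match_score < 0.95:
            return False
        return True

    # A morphed selfie on a stolen (genuine) passport might score like this:
    morph_case = OnboardingSignals(
        document_genuine=True, liveness_passed=True,
        face_match_score=0.83, capture_trusted=False,
    )
    print(per_signal_decision(morph_case))  # True:  every isolated check passes
    print(joint_decision(morph_case))       # False: the combined claim fails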

It also shows why stolen documents remain so useful in onboarding fraud. Deepfake-style manipulation isn’t replacing identity theft; it makes stolen identity data easier to reuse in a remote flow. And when one plausible signal carries too much weight, the whole decision becomes easier to abuse.

Signal-level validity doesn’t guarantee identity validity

A case like this may still be detectable, but through a different layer of controls. A stronger system checks whether the full set of signals — the document, the capture, the face comparison, and the session context — taken together, supports the same identity claim. This is what is increasingly referred to as identity signal integrity.

Signal source control. The first question is not only whether the selfie looks convincing, but whether the system can trust where it came from. Device attestation, capture metadata, source intelligence, and emerging provenance standards can help determine whether the image came from a trusted capture flow or entered the process already manipulated.
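
As a rough illustration of that first question, here is a hedged sketch of a source-trust gate. The metadata field names are assumptions; real device attestation and provenance standards such as C2PA rely on signed, verifiable evidence rather than plain flags:

    def capture_is_trusted(meta: dict) -> bool:
        # Reject images that entered the flow outside the controlled capture
        # path, e.g., a gallery upload or an injected virtual-camera feed.
        if meta.get("capture_channel") != "in_app_camera":
            return False
        # A device/SDK attestation, assumed to be verified upstream.
        if not meta.get("attestation_verified", False):
            return False
        # Editing-software traces in metadata are a cheap but useful tripwire.
        if meta.get("editing_software") is not None:
            return False
        return True

    # An injected, pre-manipulated selfie typically fails the first check:
    print(capture_is_trusted({"capture_channel": "file_upload"}))  # False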

Consistency across signals. A face match shouldn’t carry the decision on its own. It needs to hold up against the document, the capture, and the rest of the application. A document can be genuine and still belong to someone else. A selfie can look plausible and still support the wrong claim.

Reuse detection. One application may look believable on its own, but dozens rarely do. Repeated use of the same face, device, capture setup, or submission pattern across different identities can expose what single-case checks miss.
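
Sketched below is the device side of that idea, assuming each application record carries a hypothetical device fingerprint and claimed identity; the same grouping logic applies to faces, with a vector-similarity lookup on embeddings instead of exact keys:

    from collections import defaultdict

    def flag_identity_reuse(applications, min_identities=2):
        # Map each device fingerprint to the set of identities claimed from it.
        identities_by_device = defaultdict(set)
        for app in applications:
            identities_by_device[app["device_fp"]].add(app["claimed_identity"])
        # One device backing several identities is exactly the pattern
        # that single-case checks miss.
        return {fp: ids for fp, ids in identities_by_device.items()
                if len(ids) >= min_identities}

    apps = [
        {"device_fp": "dev-1", "claimed_identity": "identity-A"},
        {"device_fp": "dev-1", "claimed_identity": "identity-B"},
        {"device_fp": "dev-2", "claimed_identity": "identity-C"},
    ]
    print(flag_identity_reuse(apps))  # {'dev-1': {'identity-A', 'identity-B'}}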

Orchestration. Good systems do not let borderline results quietly accumulate into approval. They treat inconsistency as risk, apply additional checks where needed, and escalate suspicious cases before the decision is made. 
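
A hedged sketch of that escalation logic, continuing the illustrative signal names used above: soft inconsistencies are counted and routed to step-up or review instead of quietly passing one by one.

    from enum import Enum

    class Decision(Enum):
        APPROVE = "approve"
        STEP_UP = "step_up"        # e.g., request a fresh attested capture
        REVIEW = "manual_review"
        DECLINE = "decline"

    def orchestrate(signals: dict) -> Decision:
        # Hard failures are terminal.
        if signals["document_forged"] or signals["liveness_failed"]:
            return Decision.DECLINE
        # Count soft inconsistencies instead of letting each one pass alone.
        soft_flags = sum([
            signals["face_match_score"] < 0.90,
            not signals["capture_trusted"],
            signals["reused_device_or_face"],
        ])
        if soft_flags == 0:
            return Decision.APPROVE
        if soft_flags == 1:
            return Decision.STEP_UP  # borderline: step up, don't approve
        return Decision.REVIEW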

At the volume alleged in this case (nearly 50 accounts), repeated approvals start to look like a decision-layer issue rather than isolated false positives.

What financial institutions should review now

In cases like this, the system is fooled not by a single fake, but by a combination of real and slightly manipulated signals that appear consistent.

For banks and fintechs, this might be a reason to review not only face-matching performance, but the full decision layer around onboarding. Ask:

  • Can the system confirm that the document is both genuine and actually presented by the rightful holder? A real document is not enough if it belongs to someone else.

  • How well does the flow resist manipulation? That includes liveness, presentation attack detection, and checks for altered or injected facial imagery as well as screen replays.

  • Can repeated patterns be spotted across applications? The same face, device, capture setup, or submission behavior appearing under different identities should not stay invisible.

  • What happens to borderline cases? If the match is close but not convincing, or the signals do not fully line up, the flow should step up, slow down, or escalate.

  • Is the final decision based on one checkpoint or on connected evidence? Strong onboarding flows approve when the overall identity claim holds together.

💡 Reviewing your onboarding flow after cases like this?
Talk to Regula about how to strengthen document verification, biometric checks, and decision logic across the full identity journey.

