Even when a presented ID document is legitimate, KYC systems still need to confirm that the document’s owner is indeed the person being verified. That question is answered through face matching.
But how exactly does face matching technology work? And what accompanies it in a typical ID verification flow? Read on to find out.
What is face matching?
Face matching (often called face verification) is a one‑to‑one comparison of two images tied to a claimed identity. In identity verification, that usually means comparing a selfie captured during the onboarding or login process with the portrait in a trusted identity document or a previously stored template.
A note on terminology
Face matching shouldn’t be confused with face identification, a one-to-many search that scans a face against a large database to find a potential match, as opposed to a one-to-one check.
After image capture, a face matching solution converts each image into a vector of numbers (an embedding) encoding key facial features such as distances between landmarks, contour shapes, and texture patterns. If the embeddings derived from the two images are similar enough, with the similarity score crossing a set threshold, the system returns a positive result.
As for thresholds, it’s important to strike the right balance. A lower threshold increases the chance of accepting impostors (false accepts), while a higher threshold reduces that chance, but may reject legitimate users (false rejects).
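The comparison step can be sketched in a few lines. This is a minimal illustration, not a production matcher: the embeddings are assumed to come from a face recognition model, and the threshold value of 0.6 is purely illustrative.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two face embeddings, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_match(selfie_emb, reference_emb, threshold=0.6):
    # The threshold trades false accepts against false rejects:
    # lower it and impostors pass more often; raise it and
    # genuine users get rejected more often.
    return cosine_similarity(selfie_emb, reference_emb) >= threshold
```

In practice, the threshold is tuned on evaluation data so that the false accept and false reject rates match the organization’s risk appetite.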

Advanced face matching tools can identify a person despite minor changes in their appearance.
It’s worth mentioning that decisions can be wrongly influenced by external factors such as lighting, camera placement, and camera quality. These factors interact with skin color, age, and other attributes, which is why good face matching software also guides the user toward a workable pose and a suitable environment.
Face matching as part of identity verification
A face match is often preceded by ID document scans, forming a comprehensive identity verification system. The system first processes the images of the ID documents, verifies that a genuine physical document is present (rather than, say, a photocopy or a screen replay), and identifies the document type.
Based on the document type, the software then extracts all the necessary information from the data fields, cross-validates it, and performs a series of authenticity checks on the document’s physical security features. These features may include holograms, optically variable inks (OVIs), multiple laser images (MLIs), watermarks, and Dynaprint®, among others.
For ePassports and many eIDs, the system also reads the RFID chip in compliance with ICAO, verifying data groups and security objects, and extracts the chip portrait as a high-quality reference.
With the holder’s photo extracted from the ID, the system then captures a live selfie and compares the two images. More images may be cross-checked against each other: in the case of eIDs, many systems compare the selfie to both the printed portrait and the image stored in the chip.
An extra layer of defense
Before any comparison is made, it’s also common to perform a liveness check, which validates that a real human being is present, as opposed to a printed photo, an injected or replayed video, or an AI deepfake. This can be done in one of two ways:
Active liveness detection: Instructs the user to follow an on-screen prompt and perform a certain action (for example, turning their head).
Passive liveness detection: Does not require the user to do anything specific. Instead, the system quietly analyzes the face’s texture consistency and microexpressions to determine if it belongs to a living person.
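One way the two approaches combine in practice is a layered policy: run the passive check first and escalate to an active challenge only when the result is borderline. The sketch below assumes a passive detector that outputs a score in [0, 1]; the thresholds and function names are hypothetical, not any vendor’s API.

```python
def classify_passive(score, live_threshold=0.8, spoof_threshold=0.2):
    """Map a passive liveness score in [0, 1] to a decision.
    Thresholds are illustrative."""
    if score >= live_threshold:
        return "live"
    if score <= spoof_threshold:
        return "spoof"
    return "inconclusive"

def check_liveness(passive_score, run_active_challenge):
    """Hypothetical policy: passive check first, active challenge
    (e.g. asking the user to turn their head) only as a fallback."""
    decision = classify_passive(passive_score)
    if decision == "live":
        return True
    if decision == "spoof":
        return False
    # Borderline score: escalate to an active challenge.
    return run_active_challenge()
```

The appeal of this design is that most genuine users never see a challenge, while borderline sessions still get the stronger active test.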
Face matching process
A typical face matching algorithm can be broken down into the following steps:
Step 1 (Capture): A selfie or short video is captured on a mobile or web camera. If the user submits an ePassport or eID, the system also reads the main portrait stored in the RFID chip and, where available, secondary portraits (such as additional chip-stored images or ghost images printed on the document).
Step 2 (Image quality checks): Before any biometric checks, the software screens the selfie for sharpness, exposure, frontal pose, and occlusions.
Step 3 (Liveness detection): Then, the system checks that a real person is present: not a printout, mask, injection, or replay.
Step 4 (Feature extraction): The face is passed through a neural network that produces a fixed-length vector, often called an embedding or template. This encoding is what the matcher compares, not the raw photo. The detection-alignment-embedding pattern is widely documented in verification pipelines used by industry and research.
Step 5 (Comparison): The live selfie embedding is compared to the trusted reference embedding. The scorer outputs a similarity value, which is checked against the threshold to produce the pass or fail decision.
Step 6 (Outcome): If the score is below the threshold because of blur, glare, or pose, the user can be prompted to recapture. If PAD (Presentation Attack Detection) or injection checks fail, the session is blocked. If scores are near the policy boundary, organizations may send the case to manual review depending on their assurance level and local rules. NIST SP 800-63A-4 describes how biometric comparison ties the applicant to the strongest piece of identity evidence within an IDV session.
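The six steps above can be sketched as a single decision flow. This is an illustrative orchestration only: the capture, quality, liveness, and embedding components are passed in as callables standing in for real subsystems, and the threshold and review band values are invented for the example.

```python
MATCH_THRESHOLD = 0.60
REVIEW_BAND = 0.05  # scores this close to the threshold go to manual review

def verify_session(capture_selfie, quality_ok, liveness_ok,
                   embed, similarity, reference_embedding,
                   max_attempts=3):
    for _ in range(max_attempts):
        selfie = capture_selfie()            # Step 1: capture
        if not quality_ok(selfie):           # Step 2: blur, glare, pose
            continue                         # prompt a recapture
        if not liveness_ok(selfie):          # Step 3: PAD / injection checks
            return "blocked"
        score = similarity(embed(selfie),    # Steps 4-5: embed and compare
                           reference_embedding)
        if score >= MATCH_THRESHOLD + REVIEW_BAND:
            return "pass"                    # Step 6: clear accept
        if score <= MATCH_THRESHOLD - REVIEW_BAND:
            return "fail"                    # Step 6: clear reject
        return "manual_review"               # Step 6: near the policy boundary
    return "fail"                            # quality never reached the bar
```

Note how the three failure modes from Step 6 map to different outcomes: poor quality triggers a recapture, a failed liveness check blocks the session outright, and a borderline score is routed to a human reviewer.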
Types of face matching
Below are the most common patterns of face matching found in real-life deployments:
Document-to-selfie (enrollment/KYC)
A live capture is compared to portraits extracted from the ID. For ePassports and many eIDs, this includes the image in the RFID chip, which provides a high-quality facial reference for verification. Printed portraits from the visual zone can be compared as well, either as the main reference or as a secondary check. This is the default pattern for remote onboarding, and is widely used at automated border control gates, where the live capture is matched to the chip portrait.
In-document portrait cross-checks
Many IDs carry more than one portrait: a primary photo in the visual zone, a portrait in the chip, and sometimes a secondary or “ghost” image as an MLI, hologram, or an image visible under special light. Comparing these against one another catches swapped or edited portraits even before the live selfie comparison, and flags documents that have been tampered with.
Live-to-chip at eGates (kiosk flows)
At airports, the gate captures a face and runs a one-to-one match against the image in the chip as part of the document and traveler check. In some countries, it is also linked with remote check-in: passengers share their data and a selfie in advance, so when they reach a kiosk or eGate, the face match serves mainly as a fast confirmation.
A final word on face matching
Face matching is the step that links a human being in front of a camera to evidence that has already been validated. Without it, even the strongest document checks leave open the possibility of someone else using that document.
That’s why, when combined with liveness checks and strong document validation, face matching can block common fraud and make your KYC procedures predictably robust.