To onboard new customers and meet Know Your Customer (KYC) requirements, many companies now rely on remote identity verification that combines document authentication with biometric checks. While this approach is cost-effective and convenient, it also needs strong defenses against sophisticated fraud tactics.
One of the most concerning methods is the biometric video injection attack, used by fraudsters to impersonate someone or conceal their true identity.
This post explains what video injection attacks are, the common forms they take, and the safeguards businesses can deploy to stop them.
What is a video injection attack?
The online identity verification (IDV) flow typically includes document authentication and selfie verification stages. The selfie check confirms that:
The user is genuine.
The user’s claim is legitimate—the presented ID belongs to them.
To bypass biometric checks, fraudsters use various methods. One is the presentation attack, where tools such as masks, photos, or mannequins are used. Another, more advanced method is the video injection attack.
Fraudsters’ goals may vary. Most often, they try to impersonate someone by presenting a stolen ID card, or to conceal their true identity—for instance, to avoid detection as a person on a watchlist. In concealment scenarios, fake or altered IDs may also be involved.
In a video injection attack, fraudsters try to trick the verification system by bypassing the webcam or smartphone camera, which is meant to capture data from the real world. If successful, the attacker feeds a fraudulent video stream into the system, mimicking a live capture. This stream may include fabricated content, such as deepfakes—realistic videos that don’t reflect reality.
Video injection attacks target the selfie capture stage, where the user’s live photo is compared against their ID portrait.
This makes video injection attacks more complex—yet also more effective—compared to simpler presentation attacks, where fake content (such as a printout or a photo on a screen) is shown directly to the camera. With injection, the fake video is seamlessly inserted into the pipeline.
Now let’s look at the components of the attack—what helps fraudsters control the stream and what content they inject.
What tools do fraudsters use for injection attacks?
Before transmitting false data to the verification system, fraudsters must first gain and maintain control of the session, taking over the real-time feed. To achieve this, they exploit a variety of methods. Interestingly, many of these tools are widely used for legitimate purposes, which makes them highly accessible.
Virtual cameras
A virtual camera is software used for live streaming, video conferencing, and presentations. It allows a user to create multiple video inputs on one device in addition to the built-in camera. Each virtual input can broadcast its own feed, such as a prerecorded video, computer graphics, or a screen capture. These tools are popular among bloggers and online event organizers.
Fraudsters exploit this by renaming a virtual camera to look like a physical one, or by disabling all real cameras and setting the virtual one as the default. They then use it to inject fraudulent content during verification.
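One basic countermeasure can be sketched in code. In a browser session, `navigator.mediaDevices.enumerateDevices()` exposes the labels of attached cameras, and a verification client can screen those labels against known virtual-camera products. The sketch below is illustrative: the pattern list is not exhaustive, the function names are our own, and, as noted above, a fraudster can simply rename the device, so this is only one weak signal among many.

```javascript
// Known virtual-camera product names (illustrative, not exhaustive).
const VIRTUAL_CAMERA_PATTERNS = [
  /obs[\s-]?virtual/i,
  /manycam/i,
  /xsplit/i,
  /snap\s?camera/i,
  /droidcam/i,
];

// Flag a camera whose label matches a known virtual-camera product.
// In a browser, labels come from navigator.mediaDevices.enumerateDevices();
// here the function takes the label directly so it can run anywhere.
function isLikelyVirtualCamera(label) {
  return VIRTUAL_CAMERA_PATTERNS.some((p) => p.test(label));
}

// Screen a device list (shape mirrors MediaDeviceInfo: { kind, label }).
function flagSuspiciousCameras(devices) {
  return devices
    .filter((d) => d.kind === "videoinput")
    .filter((d) => isLikelyVirtualCamera(d.label));
}
```

Because label checks are trivially bypassed by renaming, production systems combine them with server-side signals such as liveness detection and feed-integrity monitoring.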
Smartphone emulators
Smartphone emulators are commonly used in software engineering, particularly for mobile app development. An emulator fully mimics the functionality of a real Android or iOS device, making it possible to develop and test applications without needing physical hardware.
Attackers install and run IDV apps inside the emulator as if it were a real phone. Beyond simulating the operating system, an emulator can also simulate hardware components such as a camera. As a result, the IDV system is tricked into believing it is interacting with a real device with real sensors.
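Mobile apps commonly defend against this with device-integrity heuristics. On Android, for instance, an app can inspect fields of `android.os.Build`, because stock emulator images often report telltale values such as "generic" or "goldfish". The sketch below expresses that heuristic over a plain object of reported properties (in a real app this logic would live in Kotlin or Java); the marker list and function name are illustrative assumptions, and determined attackers can patch these values, so this is a weak signal, not proof.

```javascript
// Illustrative emulator heuristic over device properties an app might report.
// On a real Android device these would come from android.os.Build
// (FINGERPRINT, MODEL, HARDWARE, etc.); the markers are common emulator values.
const EMULATOR_MARKERS = ["generic", "goldfish", "ranchu", "sdk_gphone", "emulator"];

function looksLikeEmulator(build) {
  const fields = [build.fingerprint, build.model, build.hardware]
    .filter(Boolean)
    .map((v) => v.toLowerCase());
  return fields.some((v) => EMULATOR_MARKERS.some((m) => v.includes(m)));
}
```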
Malicious JavaScript code
This method works only in browser-based verification sessions—whether on a PC or mobile device—because it exploits how web browsers like Chrome, Safari, and Firefox handle video capture. In these cases, the user’s camera and microphone are accessed through browser APIs (such as getUserMedia) called from JavaScript. Normally, this ensures the session is genuine.
However, skilled fraudsters can inject malicious JavaScript into that environment, intercepting the video feed and replacing it with fraudulent content before it reaches the verification server.
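One lightweight tamper signal a client can collect is whether the capture function still looks like a browser built-in. An untouched built-in stringifies to a body containing `[native code]`, while a JavaScript-level hook that replaces `navigator.mediaDevices.getUserMedia` usually stringifies to its own source. The check below is a sketch of that idea; note that `toString` itself can be spoofed, so this is a hint to report to the server, never a verdict on its own.

```javascript
// A browser built-in stringifies as "function ... { [native code] }".
// A JS-level override replacing getUserMedia will usually expose its own
// source instead. Weak signal only: Function.prototype.toString can also
// be tampered with by a sufficiently skilled attacker.
function looksNative(fn) {
  return (
    typeof fn === "function" &&
    /\{\s*\[native code\]\s*\}/.test(Function.prototype.toString.call(fn))
  );
}
```

In a browser you would call `looksNative(navigator.mediaDevices.getUserMedia)` and include the result in the signals sent for server-side risk scoring.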
Video sticks
Attackers can also use hardware devices, namely USB-based video sticks. In legitimate use cases, these devices capture video for streaming, recording, or screen sharing. For instance, a video stick connected to a TV can stream video from a smartphone.
Fraudsters, however, misuse this capability: they connect such a device to the PC running the verification session and route in a fake video feed from another device in place of the live webcam stream.
But what exactly do they present to the system? Let’s break it down.
Common types of video injections
For biometric verification, video injections are designed to imitate a real user performing the check. To accomplish this, fraudsters disable or hijack the web or mobile camera and replace the live feed with a prepared video stream.
Here are the main types of injection attacks involving legitimate, altered, and fully fabricated identities:
Video replays: Real identity
In this attack, fraudsters inject a pre-recorded authentic video. This could be a recording from a previous verification session or a clip prepared specifically for the fraud attempt. The recording is streamed into the session as if it were live.
Since the content is authentic, replayed videos can bypass basic IDV checks if the system fails to detect that it’s a duplicate or not happening in real time.
Deepfake overlays: Altered identity
To impersonate victims, fraudsters often use AI to generate deepfakes—realistic fake videos of a person’s face. They may overlay these faces onto their own video feed or animate a victim’s photo into a lifelike video.
The challenge is that deepfakes can mimic natural facial movements like blinking or nodding, making them appear “live.” For example, criminals may take a photo from social media and use AI to create a video of that person speaking and moving in real time.
If selfie verification relies on scripted prompts, deepfakes can be especially effective, since attackers can generate videos that perform the required actions on demand.
Synthetic video streams: Fake identity
Instead of impersonating a real person, attackers may create entirely synthetic identities. In this case, the video feed shows someone who doesn’t actually exist, paired with forged documents in the same name.
Synthetic identities are often used in banking fraud, where “ghost” customers can open accounts to obtain loans or launder money. For example, in early 2025, Vietnamese police uncovered the country’s first case of AI-powered biometric fraud. A 14-member gang allegedly laundered around $38 million by generating fake facial scans from short videos of recruited account holders. Authorities tied the scheme to 1,000 bank accounts, which were later frozen.
Mixed injection attacks
In practice, few attacks are “pure.” Fraudsters often combine techniques—for instance, injecting a real video while overlaying a deepfaked face, or altering frames of a stolen video to blur the line between replay and deepfake.
All these variations pursue the same goal: to trick the verification system into accepting a fraudulent identity as genuine.
How to detect and prevent video injection attacks
Given the variety of tools, techniques, and the creativity of fraudsters, a multi-layered defense strategy is essential. Here are the key components to include in a biometric verification system:
Advanced liveness detection
In remote scenarios, liveness detection is the primary defense against impersonation and synthetic identities. It proves the presence of a live person in front of the camera. Active liveness detection requires users to perform random actions during verification, such as blinking, smiling, or turning their head. Since the sequence is random and must be done in real time, it’s much harder for attackers to inject a pre-recorded or AI-generated video.
Video feed integrity checks
Video injections typically begin at the hardware level, where attackers attempt to hijack the genuine video feed. For this reason, it’s essential to ensure the integrity of data sent from a user’s camera to a company’s server through video source control.
This can be achieved with encrypted communication channels, virtual camera detection, and video feed monitoring for timing inconsistencies or other anomalies.
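Timing analysis is one such monitoring technique. A real camera delivers frames at roughly regular intervals with natural jitter, whereas injected streams can betray themselves through perfectly uniform pacing (machine-driven playback) or large gaps (switching sources mid-session). The sketch below, with thresholds and names that are our own illustrative choices, computes inter-frame statistics from capture timestamps and flags both extremes.

```javascript
// Millisecond deltas between consecutive frame-capture timestamps.
function interFrameDeltas(timestamps) {
  const d = [];
  for (let i = 1; i < timestamps.length; i++) d.push(timestamps[i] - timestamps[i - 1]);
  return d;
}

function median(xs) {
  const s = [...xs].sort((a, b) => a - b);
  const m = Math.floor(s.length / 2);
  return s.length % 2 ? s[m] : (s[m - 1] + s[m]) / 2;
}

// Flag zero jitter (machine-paced playback) or a gap several times the
// typical interval (possible stream switch). Thresholds are illustrative.
function isTimingSuspicious(timestamps) {
  const deltas = interFrameDeltas(timestamps);
  const mean = deltas.reduce((a, b) => a + b, 0) / deltas.length;
  const stdDev = Math.sqrt(
    deltas.reduce((a, d) => a + (d - mean) ** 2, 0) / deltas.length
  );
  return stdDev === 0 || Math.max(...deltas) > 4 * median(deltas);
}
```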
How vulnerable an app is to injection attacks also depends on the operating system it runs on. iOS, for example, tends to be more secure by default because Apple controls both the hardware and software. On the other hand, Android offers more flexible customization due to its open ecosystem and the variety of device manufacturers, but this flexibility may also introduce more security risks.
Deepfake detection
To address deepfake threats, biometric verification systems can incorporate machine learning models trained on large datasets of real and fake content. These models flag when a video is likely manipulated by identifying anomalies such as unnatural blending at facial edges, inconsistent lighting or shadows, or distorted eye reflections. Suspiciously precise timing in active liveness checks (for instance, when a user responds instantly to prompts) can also be a red flag.
Multi-factor verification
While obvious, this is crucial to emphasize: biometric checks should be combined with other verification measures. The process may also include:
Authentication of government-issued IDs, including data stored on the electronic chip, to match the portrait with the user’s selfie.
Additional biometrics (e.g., fingerprint recognition) to reduce impersonation risks.
Database-driven verification to cross-check personal details like name or address with official records and watchlists.
One-time passcodes sent by SMS or email to confirm contact details.
Each additional layer increases the difficulty for attackers to misuse or conceal an identity.
Regardless of how many checks are added, it’s best to build them into a single, well-designed, automated process. This reduces friction for genuine users while keeping the system resilient to fraud.
How Regula can help prevent video injections
As a provider of complete identity verification solutions, Regula offers technologies for two mandatory checks that should be part of any robust IDV procedure:
Regula Document Reader SDK—A solution for ID document authentication based on recognition and analysis of the document’s content and security features.
Regula Face SDK—A solution for selfie verification powered by advanced liveness detection that helps identify even sophisticated presentation attacks.
Both solutions are deployed on-premises and can be fully customized to integrate seamlessly into existing verification workflows. Feel free to book a call with our representative to discuss your specific needs.