
28 Mar 2024 · 7 min read · ID verification & biometrics

What Is Liveness Detection, and How Does It Help to Address Online Authentication Challenges?

Andrey Terekhin

Head of Product, Regula

How much would a successful presentation attack cost, one in which someone opens a bank account or checks in for a flight using an AI-generated ID and selfie? Hundreds of dollars? Thousands? What do you think?

The democratization and widespread availability of generative tools have reduced that cost to as little as $15. At least, that is the price of an image of a fake driver’s license or passport on OnlyFake, an AI photo generator service. Deepfakes produced by the service look as if they were captured in the real world, for example, lying on carpeting or bed sheets.

Unfortunately, people don't use these realistic creations purely for fun. For example, AI-generated images of IDs helped crypto scammers pass KYC checks on numerous crypto exchanges and financial services providers. 

This would have been impossible if the KYC flow had included customer liveness detection and had matched the person against the document they were presenting to confirm that the ID actually belonged to them.

Let’s delve into the details.


What is liveness detection?

Generally, liveness detection verifies whether the claimed identity submitted to your system belongs to a real, live person.

The mechanism behind this process traces back to the famous British mathematician Alan Turing. In 1950, he proposed measuring a computer's ability to exhibit intelligent, human-like behavior in an imitation game, later named the Turing test.

Turing test scheme: Player C must determine whether they are interacting with another person (B) or a computer (A) by analyzing the responses to written questions.

During the test, a player interacts with a computer and another person in a question-and-answer manner without knowing who is who. If the player can't tell which one is the computer based on the responses they receive, the computer passes the Turing test by demonstrating human-like intelligence.

In the liveness detection process, the computer becomes the player. Its task is to identify whether an applicant is a real human or merely a fake.

Many people attribute the term "liveness detection" to Turing as well. However, it was arguably first used by Dorothy E. Denning, a US information security researcher, in her 2001 article for Information Security Magazine. In it, Denning argued that a good biometric system should not rely on the user's secrets, but rather on its ability to detect "liveness," much as friends and colleagues recognize each other by face in the real world.

A user’s selfie is now indeed one of the most popular options for verifying their liveness during online onboarding or authentication.

Types of liveness detection

Any liveness check requires the user to interact with the verification system's interface, but the level of engagement can differ. There are three common scenarios:

Active liveness

The first generation of liveness detection technology was based on active communication between the user and the software. The user has to follow a set of instructions, typically a random sequence of actions. For example, an active liveness check based on facial recognition may ask the user to turn their head to the right, smile, blink, and so on.
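
To make this more concrete, below is a minimal, illustrative sketch of how an active liveness challenge could be orchestrated. Everything here (the action names, the detector callback) is hypothetical and not tied to any particular SDK; a real product would drive the prompts and the detection through its own API.

```python
import random
from enum import Enum
from typing import Callable, List

class ChallengeAction(Enum):
    TURN_HEAD_LEFT = "turn_head_left"
    TURN_HEAD_RIGHT = "turn_head_right"
    SMILE = "smile"
    BLINK = "blink"

def generate_challenge(length: int = 3) -> List[ChallengeAction]:
    """Pick a random, non-repeating sequence of actions for the user to perform."""
    return random.sample(list(ChallengeAction), k=length)

def run_active_liveness(
    challenge: List[ChallengeAction],
    detect_action: Callable[[ChallengeAction], bool],
) -> bool:
    """Prompt the user to perform each action in order; fail on the first miss.

    `detect_action` stands in for a real detector that watches the camera
    stream and reports whether the requested action was actually performed.
    """
    for action in challenge:
        if not detect_action(action):
            return False  # expected action not observed: treat the attempt as failed
    return True
```

Because the sequence is random, an attacker can't simply replay a pre-recorded video: the chance that it contains exactly the requested actions in the requested order is small.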

Although active liveness detection is a proven method to identify fakes, it’s less convenient for users, especially seniors.

Passive liveness

In contrast to active liveness, conducting a passive liveness check doesn’t require extra actions on the user’s part. Usually, it boils down to one thing a customer needs to do, like take a selfie. For this reason, passive liveness is considered more user-friendly and seamless compared to active liveness checks. 

The era of passive liveness began alongside the widespread adoption of smartphones with high-resolution cameras and advances in facial recognition technology. Since selfies must meet a certain quality standard, this can be a limitation for users with less capable mobile devices.

💡 If you want to explore further, this blog post details the differences between active and passive authentication.

Hybrid liveness

Finally, some people add another category known as hybrid, or semi-passive, liveness. This technology sits at the intersection of the two previous approaches. During authentication with a hybrid liveness check, the user performs one simple step plus a quick additional task. For example, they may need to take a selfie while smiling into the mobile camera.

The idea behind hybrid liveness is to create a verification flow that is not too disruptive for customers, yet still more secure than passive liveness.

How does liveness detection work?

The primary purpose of liveness detection technology is to prevent fraudsters from illegally accessing online services by using deepfakes, stolen photos, video injections, video replays, silicone masks, and other forms of spoofing.

In biometric verification with facial recognition, a liveness check identifies signs that the photo presented by a user does not come from a live person. Software solutions of this kind (often a face liveness SDK) search for the following spoof artifacts:

  • High-resolution 2D paper photos and paper masks

  • Human-like dolls and latex, silicone, or 3D masks

  • Wax heads, mannequins, and other head-only artifacts

  • Artificial skin tone, moiré noise, and unexpected shadows typical of deepfakes

  • Traces of digital screens, such as excessive glare

Under the hood, such liveness detection algorithms are powered by neural networks. Trained on hundreds of thousands of face images with various backgrounds, they can recognize synthetic traits in the photos submitted by users.
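
As a rough illustration of what such a network does at inference time, here is a sketch that runs a hypothetical pre-trained live-vs-spoof classifier over a selfie using PyTorch. The model file, its input size, and the decision threshold are assumptions made for the example; a production face liveness SDK hides all of these details behind its API.

```python
import torch
from PIL import Image
from torchvision import transforms

# Preprocessing that matches what the (hypothetical) model was trained on.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Assumption: a pre-trained anti-spoofing network exported as a TorchScript file.
model = torch.jit.load("antispoof_model.pt")
model.eval()

def liveness_score(selfie_path: str) -> float:
    """Return the estimated probability that the selfie shows a live person."""
    image = Image.open(selfie_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)   # shape: (1, 3, 224, 224)
    with torch.no_grad():
        logits = model(batch)                # shape: (1, 2) -> [spoof, live]
        probs = torch.softmax(logits, dim=1)
    return probs[0, 1].item()

if liveness_score("selfie.jpg") < 0.90:      # threshold chosen per risk tolerance
    print("Possible presentation attack: escalate to manual review")
```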

To perform a liveness check, the neural network scans a user’s face and generates a map representing the unique properties of the face. This map can be two-dimensional (X, Y) or three-dimensional (X, Y, Z), corresponding to 2D or 3D liveness, respectively.

These two approaches map naturally onto passive and active liveness checks. A passive approach often relies on 2D facial map generation, which is why a single selfie from the user is sufficient to retrieve all the data the neural network needs for analysis.

It's easy to guess that the 3D liveness model is frequently implemented with an active flow. By prompting the user to perform specific movements, such as smiling or rotating their head, the system can measure along the Z-axis, that is, the depth of the face.

2D technology is considered faster, while 3D is more secure. This is why 3D liveness is recommended for use at critical points in the customer journey, such as payment approvals. 2D technology works best for lower-risk operations like face unlock.
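
To illustrate what a facial "map" can look like in practice, the sketch below uses the open-source MediaPipe Face Mesh to extract landmarks from a selfie. Each landmark carries normalized X and Y coordinates plus a relative depth value, so the same output can feed either a 2D (X, Y) or a 3D (X, Y, Z) analysis. This is a generic example for illustration only, not how any particular vendor's SDK works.

```python
import cv2
import mediapipe as mp
import numpy as np

mp_face_mesh = mp.solutions.face_mesh

def face_map(image_path: str, use_depth: bool = False) -> np.ndarray:
    """Return an (N, 2) or (N, 3) array of facial landmark coordinates."""
    image = cv2.imread(image_path)
    rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

    with mp_face_mesh.FaceMesh(static_image_mode=True, max_num_faces=1) as mesh:
        results = mesh.process(rgb)

    if not results.multi_face_landmarks:
        raise ValueError("No face found in the image")

    landmarks = results.multi_face_landmarks[0].landmark
    if use_depth:
        # 3D map: X, Y plus the relative depth (Z) of each landmark
        return np.array([(p.x, p.y, p.z) for p in landmarks])
    # 2D map: X, Y only, as used by many passive checks
    return np.array([(p.x, p.y) for p in landmarks])
```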

Biometric verification systems utilize various human characteristics as authentication factors. While one company may authenticate users via selfies, another may rely on voiceprints. Regardless, the idea of a liveness check remains the same for different biometric factors used in identity verification: the algorithm must determine that the authentication data is being presented by a live person.

For example, voice liveness detection means identifying synthetic artifacts left by speech generators, as well as pre-recorded utterances, in a user's audio sample. To reveal discrepancies, such solutions analyze signal power distribution, voice frequency, tone reflections, and so on.
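
Purely as an illustration of the kind of signal analysis involved, the sketch below computes a few simple spectral statistics from an audio sample with the open-source librosa library. Real voice anti-spoofing systems use far richer features and trained classifiers; the feature selection and the file path here are only examples.

```python
import librosa
import numpy as np

def voice_features(audio_path: str) -> dict:
    """Compute simple spectral statistics often inspected in voice anti-spoofing."""
    # Load the sample at a fixed rate so features are comparable across recordings.
    signal, sr = librosa.load(audio_path, sr=16000)

    # Short-time Fourier transform -> power spectrogram.
    power = np.abs(librosa.stft(signal)) ** 2

    return {
        # How evenly energy is spread across frequencies
        # (synthetic speech often looks "too clean" here).
        "spectral_flatness": float(np.mean(librosa.feature.spectral_flatness(y=signal))),
        # Where the "center of mass" of the spectrum sits.
        "spectral_centroid": float(np.mean(librosa.feature.spectral_centroid(y=signal, sr=sr))),
        # A coarse view of the overall power distribution.
        "mean_power_db": float(np.mean(librosa.power_to_db(power))),
    }

features = voice_features("utterance.wav")
# A real system would feed such features (or raw spectrograms) into a trained
# classifier rather than comparing them against hand-picked thresholds.
```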

Why is liveness detection key for biometric systems?

There is a never-ending rivalry between predator and prey. Fraudsters keep advancing their techniques and preparing new, sophisticated attacks, eager to find blind spots in the online identity verification flow.

The rise of generative AI tools plays into scammers’ hands. Now, they employ new-gen photo and video creators to make compelling synthetic identities and ID forgeries. 

This threat is recognized by numerous industry standards and regulations, such as the ISO/IEC 30107 series dedicated to biometric presentation attack detection. Liveness detection is a critical part of the current framework ensuring the integrity and security of biometric systems. 

Moreover, when verifying users online, it’s important to implement liveness detection technology for both biometrics and ID documents. Choosing your preferred approach—active, passive, or hybrid—always depends on your risk tolerance and business objectives.

Not surprisingly, liveness detection, like any technology, has its limitations. A reliable identification system must include all the necessary components, and address all the potential risks. Regula Face SDK, combined with Regula Document SDK, can help you build a robust identity verification system. Book a call with one of our representatives to learn more.

