
Press release - April 27, 2023

One-Third of Global Businesses Already Hit by Voice and Video Deepfake Fraud

The rise of AI-generated identity fraud such as deepfakes is alarming: 37% of organizations have experienced deepfake voice fraud and 29% have fallen victim to deepfake videos, according to a survey by Regula, a global developer of forensic devices and identity verification (IDV) solutions. As the artificial intelligence technology used to create deepfakes becomes more accessible, the risks continue to mount, posing a significant challenge for businesses and individuals alike.

 

Artificial intelligence can be used to create increasingly realistic and convincing deepfakes, making it more difficult to distinguish between genuine and manipulated content. Fake biometric artifacts like deepfake voice or video are perceived as real threats by 80% of companies, according to Regula’s survey.* And businesses in the USA seem to be the most concerned: about 91% of organizations consider it to be a growing threat. 

Advanced identity fraud

The increasing accessibility of AI technology poses a new risk: it is becoming easier for individuals with malicious intent to create deepfakes, amplifying the danger to businesses and individuals alike.


AI-generated fake identities can be difficult for humans to detect, unless they are specially trained to do so. While neural networks may be useful in detecting deepfakes, they should be used in conjunction with other anti-fraud measures that focus on physical and dynamic parameters, such as face liveness checks, document liveness checks via optically variable security features, etc. Currently, it is difficult or even impossible to create deepfakes that display expected dynamic behavior, so verifying the liveness of an object can give you an edge over fraudsters. In addition, cross-validating user information with biometric checks and recent transaction checks can help ensure a thorough verification process.

— Ihar Kliashchou, Chief Technology Officer at Regula
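To make the layered approach Kliashchou describes more tangible, below is a minimal sketch of a decision rule that combines a neural-network deepfake score with liveness and cross-validation signals. Every name, field, and threshold here is an illustrative assumption, not an actual Regula interface.

```python
# Illustrative only: a layered anti-fraud decision, assuming hypothetical
# signals that a real IDV stack would produce upstream.

from dataclasses import dataclass


@dataclass
class VerificationSignals:
    deepfake_score: float           # 0.0 (likely real) .. 1.0 (likely fake), from a neural network
    face_liveness_passed: bool      # dynamic face liveness check
    document_liveness_passed: bool  # optically variable security features behaved as expected
    data_cross_valid: bool          # user data agrees with biometric and recent-transaction checks


def is_identity_trusted(signals: VerificationSignals,
                        deepfake_threshold: float = 0.5) -> bool:
    """Accept only when every independent layer agrees; a single failed
    layer is enough to flag the session for manual review."""
    return (
        signals.deepfake_score < deepfake_threshold
        and signals.face_liveness_passed
        and signals.document_liveness_passed
        and signals.data_cross_valid
    )


if __name__ == "__main__":
    session = VerificationSignals(
        deepfake_score=0.12,
        face_liveness_passed=True,
        document_liveness_passed=True,
        data_cross_valid=True,
    )
    print("trusted" if is_identity_trusted(session) else "flag for review")
```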

At the same time, advanced identity fraud is not only about AI-generated fakes. According to Regula’s survey, nearly half of organizations globally (46%) experienced synthetic identity fraud in the past year. Also known as “Frankenstein” identities, these are created when criminals combine real and fake ID information to form entirely new, artificial identities, which are typically used to open bank accounts or make fraudulent purchases.

Organizations targeted by identity fraud

Unsurprisingly, the banking sector is the most vulnerable to this kind of identity fraud. Nearly all of the companies in the industry surveyed by Regula (92%) perceive synthetic fraud as a real threat, and almost half (49%) have recently come across this scam.

Emerging identity fraud methods

To prevent the majority of today’s identity fraud, companies should enable sophisticated document verification in addition to comprehensive biometric checks. It’s now crucial to include the following tools in their arsenal:

  • Thorough ID verification. It’s vital to enable extended document verification when proving someone’s identity remotely. A company should be able to run the widest possible range of authenticity checks, covering all the security features in IDs. Even in a zero-trust-to-mobile scenario with NFC-based verification of electronic documents, chip authenticity can be re-verified on the server side, which is currently the most secure method to prove that a document is genuine. Moreover, for those running an international business, a comprehensive document template database that includes a wide range of templates from numerous countries and territories is crucial. This helps organizations in any part of the world validate and authenticate nearly any identity document, whether on-site or remotely, preventing fraud and mitigating security risks.

  • Biometric verification. The indispensable second half of the process is quick and robust liveness verification to prove that no malefactor is trying to present non-live imagery (a mask, printed image, or digital photo) during the check. It should go even further: biometric verification solutions should also match a person’s selfie with their ID portrait and with any database an organization utilizes to confirm that it is the same person. To ensure that fraudsters cannot reuse users’ liveness sessions for tampering, the enrollment process should be configured with parameters unique to each company’s requirements. The solution should also support multiple attributes to bind to a photo, such as a person’s name, age, gender, driver’s license number, credit score, etc., for a more reliable and secure enrollment process. A simplified sketch of how these two halves can fit together follows this list.
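To make the combination of document and biometric checks described above more concrete, here is a minimal sketch of how such a flow could be orchestrated. The functions verify_document, verify_biometrics, and enroll_identity, along with their fields and return values, are hypothetical stand-ins for real verification services, not Regula’s SDK.

```python
# A minimal sketch of the two-part process outlined above, using mock service
# stubs; this illustrates the flow only and performs no real verification.

from typing import Mapping, Optional


def verify_document(doc_image: bytes, nfc_chip_data: Optional[bytes]) -> dict:
    """Stand-in for a document verification service: authenticity checks on the ID,
    plus server-side re-verification of chip authenticity when NFC data is present."""
    return {
        "authentic": True,
        "chip_verified": nfc_chip_data is not None,
        "portrait": b"<portrait bytes>",
        "name": "JANE DOE",
        "document_number": "X1234567",
    }


def verify_biometrics(selfie: bytes, id_portrait: bytes) -> dict:
    """Stand-in for a biometric service: liveness check on the selfie,
    then a face match against the ID portrait."""
    return {"live": True, "match": True}


def enroll_identity(doc_image: bytes,
                    selfie: bytes,
                    nfc_chip_data: Optional[bytes],
                    attributes: Mapping[str, str]) -> bool:
    """Accept enrollment only when both halves of the process succeed and the
    attributes bound to the photo are consistent with the document data."""
    doc = verify_document(doc_image, nfc_chip_data)
    if not doc["authentic"]:
        return False

    bio = verify_biometrics(selfie, doc["portrait"])
    if not (bio["live"] and bio["match"]):
        return False

    # Cross-validate the bound attributes (name, document number, etc.) against the ID.
    return all(doc.get(field) == value for field, value in attributes.items())


if __name__ == "__main__":
    ok = enroll_identity(
        doc_image=b"<id scan>",
        selfie=b"<selfie>",
        nfc_chip_data=b"<chip data>",
        attributes={"name": "JANE DOE", "document_number": "X1234567"},
    )
    print("enrolled" if ok else "rejected")
```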

 

Overall, an effective identity verification process today combines multiple techniques with the widest possible cross-validation of a user’s information and attributes.

Read the full report looking into the state of identity verification in 2022 here.

Download the full infographic. 

 

*The research was initiated by Regula and conducted by Sapio Research in December 2022 and January 2023 using an online survey of 1,069 Fraud Detection/Prevention decision makers across the Financial Services (including Banking and FinTech), Technology, Telecoms, and Aviation sectors. The respondent geography included Australia, France, Germany, Mexico, Turkey, the UAE, the UK, and the USA.
