
01 Apr 2026 · IDV basics

Top 5 Biggest IDV Incidents of Q1 2026

Nikita Dunets

Vice President of Digital Identity Verification

It’s safe to say that Q1 2026’s IDV incidents didn’t suffer from a lack of variety. In less than three months, a cryptocurrency exchange was accused of 6.65 million KYC violations, a major bank’s onboarding checks were bypassed with deepfake-assisted fraud, and online platforms had their share of failures with user age checks.

The common thread (if there is one to be found) is that each case exposed a different weak point in remote identity checks, and each one carried real consequences, whether in fines, court action, rollout delays, or public backlash.

In this identity verification news report, we will look at the five biggest IDV incidents of Q1 2026 and offer a useful snapshot of where identity systems are still failing and what companies should fix before those weaknesses turn into their own headline problem.


1. Bithumb’s 6.65 million KYC compliance violations

South Korea’s Financial Intelligence Unit (FIU) published its action against Bithumb on March 17. The FIU said it found a staggering 6.65 million violations, including about 3.55 million customer verification failures and about 3.04 million cases where trading was not blocked even though customer checks were incomplete. It also said Bithumb supported 45,772 virtual asset transfer transactions with 18 unregistered foreign operators.

The violations were described as follows:

  • Blurred IDs and IDs with part of the data hidden were still accepted.

  • Customers with blank or inadequate address fields were still cleared.

  • Reverification was completed without collecting a fresh ID, using the original sign-up document instead.

  • Reverification deadlines were missed.

  • Users whose money-laundering risk level had gone up were allowed to keep trading without extra customer checks.

  • Driver’s license checks were completed without the encrypted serial number that the FIU said was needed for an authenticity check.

  • Copies of customer ID documents were not kept in about 16,000 cases.
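Several of the failures on this list are mechanical enough that simple automated intake rules could have flagged them before a customer was cleared. Below is a minimal, hypothetical sketch of such rule checks; the field and function names are invented for illustration and do not describe Bithumb's actual system or any specific vendor's API.

```python
from dataclasses import dataclass

@dataclass
class KycCase:
    id_image_blurred: bool          # quality check failed on the ID scan
    id_data_occluded: bool          # part of the document data is hidden
    address: str                    # customer-supplied address
    reverified_with_fresh_id: bool  # a new ID was collected at reverification
    reverification_overdue: bool    # reverification deadline has passed
    risk_level_raised: bool         # money-laundering risk level went up
    enhanced_checks_done: bool      # extra checks run after the risk increase
    id_copy_retained: bool          # a copy of the ID is kept on file

def intake_violations(case: KycCase) -> list[str]:
    """Return the rule violations that should block clearing this customer."""
    v = []
    if case.id_image_blurred or case.id_data_occluded:
        v.append("unreadable or partially hidden ID accepted")
    if not case.address.strip():
        v.append("blank or inadequate address field")
    if not case.reverified_with_fresh_id:
        v.append("reverification reused the original sign-up document")
    if case.reverification_overdue:
        v.append("reverification deadline missed")
    if case.risk_level_raised and not case.enhanced_checks_done:
        v.append("trading allowed despite a raised risk level")
    if not case.id_copy_retained:
        v.append("no copy of the customer ID retained")
    return v
```

The point of the sketch is that every violation category the FIU listed maps to a yes/no rule that can run at onboarding time, so none of them required sophisticated fraud detection to catch.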

The penalty package matched the scale of the findings. The FIU ordered a six-month partial business suspension running March 27 through September 26, 2026, and said Bithumb would face a total fine of 36.8 billion won. 

The chief executive received a reprimand, the reporting officer received a six-month suspension, and the notice added a detail compliance teams should not miss: existing users could still trade, while newly registered users were temporarily blocked only on external virtual-asset transfers, not on all activity.

2. ABN AMRO’s onboarding weakness exposed via deepfake

The quarter’s clearest courtroom example of deepfake-assisted onboarding fraud came out of the Netherlands. On March 18, DutchNews reported that prosecutors told an Amsterdam court that a man had opened 46 ABN AMRO bank accounts in other people’s names by using deepfake technology to bypass the bank’s face-recognition checks.

Some reported details of the case include the following statements:

  • ABN AMRO’s mobile onboarding flow asked for a photo ID and a selfie. Prosecutors said the suspect used doctored images of his own face to fool that check.

  • Some victim documents reportedly came from social media. Others came from a Dutch classifieds site, where applicants were told to send IDs for “verification.”

  • One application used a woman’s ID, but the selfie still showed a man. That mismatch helped expose the wider fraud.

  • NL Times reported that police found debit cards and PINs for multiple ABN AMRO accounts, dozens of fake IDs, and chat logs with ChatGPT in which the defendant asked how to bypass the bank’s security measures.

  • CCTV reportedly showed cash deposits into several of the accounts, which authorities said could point to money laundering.

DutchNews said prosecutors asked for a 30-month prison sentence, with six months suspended, and €6,240 in compensation for the bank.

3. Roblox’s age assurance system targeted by public spoofing

Roblox made one of the quarter’s biggest bets on face-based age checks, which later turned into an uphill battle against public spoofing. In a January 7 post, the company said age checks for chat were rolling out globally and said it was the first large online gaming platform to require facial age checks for users of all ages to access chat. The system was meant to limit communication between adults and children younger than 16 and require parental consent for users younger than 9 to use chat.

Just under a week later, WIRED reported that users, parents, and developers had already filled forums and social feeds with complaints about age misclassification. It was quickly followed by an Engadget report on children also spoofing the system in ways that looked ridiculous and serious at the same time.

Summing up the points brought up by the two identity verification news reports:

  • WIRED reviewed hundreds of posts from users and parents who said the system got their ages wrong. They cited a 23-year-old allegedly put in the 16 to 17 group and an 18-year-old allegedly pushed into the 13 to 15 group.

  • Engadget said videos showed children fooling the check with avatar images, drawn-on wrinkles and stubble, and even a photo of Kurt Cobain.

Roblox did react: in February, the company said adults had a one-time reset option, parents had a one-time chance to correct a child’s age, and the platform would ask for another age check if later behavior strongly suggested the account holder was much older or younger than the recorded age.  

The company also said that 45% of its 144 million daily active users had already completed an age check via facial age estimation or ID verification.

Those fixes were sensible enough, but the whole situation highlighted that once a face-age system is live at global consumer scale, it needs to deliver a solid user experience. In the end, most people don’t judge its effectiveness by a lab accuracy score, but by how it holds up when millions of users try it and expect frictionless performance.

4. Discord’s facial age checks backfire

Discord opened Q1 with a similarly ambitious age assurance push. On February 9, the company said it would roll out teen-by-default settings globally in early March. Users would need to be verified as adults to access age-restricted servers, channels, and certain safety settings, while users who could not be confidently classified by Discord’s internal model would be asked to verify by facial age estimation or ID submission.

The company also explained that its internal age model used account-level cues such as account tenure, payment method data, and high-level patterns in server use. It added that the facial-age process ran on the device, and that Discord itself would receive only an age group, not the face scan.

However, on February 11, 404 Media reported on a newly released browser tool that claimed to bypass Discord’s age check with a manipulable 3D model of a synthetic adult male face.

Discord changed course less than two weeks later: on February 24, TechCrunch reported that the company was delaying global rollout until the second half of 2026. Discord said that about 90% of users would never need to verify at all, admitted that its earlier communication had gone badly, said more methods would be added, including credit card verification, and said future partners would need to run age verification fully on the user’s device.

5. Reddit’s self-declared user age problem

Our closing entry came from a platform that suffered from the opposite problem to Discord and Roblox: the absence of any robust age check. On February 24, the UK Information Commissioner’s Office (ICO) imposed a £14,472,500 penalty on Reddit for failing to apply any robust age-assurance mechanism.

More specifically, the ICO reported that Reddit had barred children under 13 in its terms, but still relied on self-declared age at sign-up. The regulator said Reddit did not put age-assurance measures in place until July 2025 and warned that self-declaration is easy for children to bypass.

According to the ICO, this meant Reddit had no lawful basis for processing the personal data of children under 13. The regulator also found that the company had failed to carry out a data protection impact assessment on risks to children before January 2025.

In an open letter dated March 12, the ICO told social media and video-sharing platforms to move past self-declared age and use available age-assurance technology to keep young children off services that are not built for them. The letter cited the Reddit fine as part of that push.

Staying one step ahead of attackers and compliance gaps

The above five cases may look different on the surface, but they point to the same underlying problem: one check or one security layer is not enough, and verification flows can also fail when the overall configuration is too relaxed for the level of risk. The next wave of identity fraud will not rely on one forged document or one deepfake alone, but on fast, repeated attempts that probe for the controls that are easiest to get through.

This is why a solution like Regula IDV Platform can be very effective against the new kind of fraud: it combines document, face, and age verification in one system, keeps identity data in user profiles with full history, and gives teams role-based access, audit visibility, and a choice of on-prem, cloud, or hybrid deployment. 

More specifically, Regula IDV Platform can:

  • Detect document fraud by checking VIZ, MRZ, barcode, and RFID data against each other, running document liveness, and analyzing dynamic security features such as holograms and optically variable ink (for more than 16,000 document templates from 254 countries and territories).

  • Match a selfie to the portrait on the document and, where available, the chip portrait, while using active or passive liveness to stop static images, printed photos, video replays, video injections, and masks.

  • Verify age inside the same flow through built-in age-assurance support and workflow rules for age verification, instead of falling back on self-declared age or a weak face-only gate.

  • Route higher-risk or incomplete cases into stricter workflows, manual review, regional checks, or AML and PEP screening instead of sending every user through one light-touch path.

  • Conduct geo, IP and device checks to support extended identity verification and build a historical device footprint for every profile, preserving device attributes collected across interactions.

  • Re-check and authorize returning users inside the same profile record, keep the history of documents, biometrics, and attachments, and reuse prior verified data without rebuilding the case from scratch.

  • Limit what fraud, compliance, operations, or support staff can view, edit, export, or download through roles, permissions, workflow access, profile-group access, and audit controls.

  • Keep detailed logs of every interaction and a full profile history to support compliance work and regulatory requirements in case of an audit.

  • Integrate with existing client and third-party systems, including support and pre-integrations for popular AML, PEP, and sanctions screening services, so verification results move into the next control point without copy-paste or handoffs.
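To make the risk-routing idea above concrete, here is a hypothetical sketch of how verification signals might be combined to pick a workflow. The names, scores, and thresholds are invented for illustration; they are not Regula’s API or default configuration.

```python
from dataclasses import dataclass

@dataclass
class VerificationResult:
    document_score: float    # 0..1 confidence from document checks
    face_match_score: float  # 0..1 selfie-to-portrait match confidence
    liveness_passed: bool    # active or passive liveness result
    new_device: bool         # device not seen in the profile's history
    high_risk_geo: bool      # IP/geo flagged as higher risk

def route(result: VerificationResult) -> str:
    """Pick a workflow: auto-approve, step-up checks, or manual review."""
    if not result.liveness_passed:
        return "manual_review"
    if result.document_score < 0.6 or result.face_match_score < 0.6:
        return "manual_review"
    if result.new_device or result.high_risk_geo:
        return "step_up"  # e.g., extra documents or AML/PEP screening
    if result.document_score >= 0.9 and result.face_match_score >= 0.9:
        return "auto_approve"
    return "step_up"
```

The design choice this illustrates is the one the incidents above argue for: no single score clears a user on its own, and any weak or ambiguous signal escalates the case instead of letting it through a light-touch path.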

Explore Regula IDV Platform

See how you can verify and manage customer identities with a single, all-in-one solution.
