The rise of Generative AI (GenAI) technologies is reshaping industries across the board. Finance-related businesses—especially fintechs, crypto providers, and banks operating fully or partly online—were among the first to feel the shift. These companies tend to be early adopters of new tech, but they’re also top targets for fraud.
In this blog post, Vipul Sharma, Senior Solutions Architect at QIIB, Qatar, and Nikita Dunets, Deputy Director of Digital Identity Verification at Regula, explore how this powerful technology is changing identity verification (IDV) in banking—for better and worse.
What changed for banks with GenAI’s arrival?
When it comes to AI in IDV, there are two sides to the coin.
On the downside…
A rise in new fraud types
GenAI has made it easier and cheaper for fraudsters to create fakes—both forged documents and synthetic identities. For example, one of the largest dark web marketplaces, recently taken down by Dutch and US authorities, was offering fake IDs for just $9 per digital asset.
This has fueled new threats like deepfakes. These AI-generated media are used to create lifelike identities that don’t actually exist, or to impersonate real customers using stolen biometrics, such as selfies or voice recordings. With countless photos and videos publicly available on social media, scammers now have a massive pool of content to train their models and carry out attacks.
Criminals also have access to purpose-built AI tools for cyberattacks and scams. FraudGPT, a subscription-based product, can generate deceptive content such as phishing pages and malicious code, as well as fake IDs, photos, and videos.
In late 2024, the US Treasury’s FinCEN issued an alert warning that criminals are using GenAI-powered deepfakes to bypass bank ID checks, following a rise in suspicious activity reports from financial institutions.
A surge in AI-powered attacks
In 2025, impersonation attacks, including identity spoofing, biometric fraud, and deepfakes, have become a daily reality for many businesses. What’s more, these AI-driven threats are now outpacing traditional fraud methods like document forgery and social engineering.
According to a recent Regula survey, 33% of banks globally have already faced deepfake-related attacks, with the UAE reporting the highest share of affected companies at 35%.
Notably, AI-powered fraud tends to hit enterprise-level organizations first—a category that includes many banks. The reason is simple: banks manage large sums of money. According to Deloitte, fraud losses in banking driven by GenAI are expected to keep rising, reaching $40 billion in the US alone by 2027—up from $12.3 billion in 2023.
High-profile incidents
Fraudsters don’t stop at onboarding-stage attacks. Some go further, impersonating executives from major companies, including leading local banks, and launching fake campaigns on social media to trick victims.
One recent case in Indonesia involved scammers posing as Bank Central Asia (BCA) on TikTok. They used BCA’s name and logo, falsely promoted collateral-free, low-interest loans, and even claimed these offers were backed by the country’s president. To make the scheme look more convincing, they used an image of BCA’s President Director, Jahja Setiaatmadja, pulled from a local news website.
Victims were asked to apply via a WhatsApp number, a classic move in phishing-style scams.
Eroding customer trust and growing fear
In an age of synthetic media, large-scale data breaches, and the digitalization of national ID systems, many people are losing confidence in how organizations handle their personal data, especially biometrics. The rise of more sophisticated threats only deepens these concerns.
For banks and other finance-related companies, this presents a major challenge. Since IDV is a mandatory part of Know Your Customer (KYC) and Anti-Money Laundering (AML) procedures, earning and keeping customer trust has become even harder.
At QIIB, we’ve seen firsthand that the hardest part isn’t just the fraud itself; it’s balancing onboarding speed with customer trust. GenAI makes both sides harder and easier at once.
On the positive side…
It would be a mistake to see only the downsides of any technology, and AI is no exception. It’s a double-edged sword that can also be used to improve security and efficiency. In fact, neural networks—the backbone of AI—now enhance many routine processes in IDV.
Smarter automation for biometrics and ID document checks
Thanks to automation, IDV software can now replace manual reviews and live video interviews during onboarding, making verification not just faster, but also more accurate and consistent.
Neural networks can detect a document in an image and identify its type; spot security features such as machine-readable zones (MRZs), barcodes, and RFID chips; and read and cross-check data across multiple document elements.
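To make that cross-checking concrete, here is a minimal sketch of one public, well-defined example: ICAO 9303 check-digit validation for MRZ fields. Production engines combine dozens of deterministic checks like this one with neural-network-based ones; it’s shown here only because the algorithm is compact and openly specified.

```python
# Minimal sketch: ICAO 9303 check-digit validation for a passport MRZ field.
# Character values (0-9 for digits, 10-35 for A-Z, 0 for the '<' filler)
# are weighted in a repeating 7-3-1 cycle; the sum modulo 10 must equal
# the check digit printed in the MRZ.

def mrz_check_digit(field: str) -> int:
    weights = (7, 3, 1)
    total = 0
    for i, ch in enumerate(field):
        if ch.isdigit():
            value = int(ch)
        elif ch.isalpha():
            value = ord(ch.upper()) - ord("A") + 10
        else:  # '<' filler character
            value = 0
        total += value * weights[i % 3]
    return total % 10

def field_is_consistent(field: str, printed_digit: str) -> bool:
    return printed_digit.isdigit() and mrz_check_digit(field) == int(printed_digit)

# Document number and check digit from the ICAO 9303 specimen passport:
print(field_is_consistent("L898902C3", "6"))  # True: field and digit agree
```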
When it comes to selfie verification, AI can reliably confirm the presence of a real person and match their live image with the portrait in their ID or customer profile.
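Under the hood, that matching step typically compares face embeddings. Below is a minimal sketch of the comparison, assuming the vectors come from some face-embedding network; random vectors stand in for real embeddings here, and the 0.6 threshold is an illustrative assumption that real systems tune per model and risk level.

```python
# Minimal sketch of the selfie-to-ID-portrait matching step: compare two
# face embeddings with cosine similarity and apply a decision threshold.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(selfie_vec: np.ndarray, portrait_vec: np.ndarray,
                threshold: float = 0.6) -> bool:
    return cosine_similarity(selfie_vec, portrait_vec) >= threshold

# Stand-ins for embeddings produced by a face-embedding network:
rng = np.random.default_rng(0)
selfie = rng.normal(size=512)
portrait = selfie + rng.normal(scale=0.1, size=512)  # nearly identical face
print(same_person(selfie, portrait))  # True
```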
Real-world example: UBS, the largest bank in Switzerland, automated its onboarding using e-passports, making the process faster, more accurate, and available 24/7.
Real-time monitoring and analytics
Modern IDV solutions increasingly include real-time anomaly detection powered by machine learning. These models track user behavior and device data during onboarding or transactions, flagging suspicious patterns, such as unusual locations, IP addresses, or typing behavior, that may signal fraud.
For example, if someone tries to open a new account from an unexpected region or using a mismatched device, the system can either require additional verification or block the session entirely.
AI also enables instant data correlation across sources, linking selfies, ID document details, and third-party databases, to spot inconsistencies and detect synthetic identities before they slip through.
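A minimal sketch of how such signals might be combined is shown below. The fields, weights, and thresholds are illustrative assumptions, not a description of any vendor’s risk engine; production systems usually learn these patterns with ML models rather than fixed rules.

```python
# Minimal sketch, not a production risk engine: combine a few device,
# location, and cross-source signals into a score and pick a response.

from dataclasses import dataclass

@dataclass
class OnboardingSession:
    ip_country: str          # geolocated from the request IP
    document_country: str    # issuing country of the submitted ID
    device_is_rooted: bool   # rooted devices can host injection tooling
    virtual_camera: bool     # third-party webcam plugin detected
    registry_match: bool     # ID data confirmed against a third-party database

def risk_score(s: OnboardingSession) -> int:
    score = 0
    if s.ip_country != s.document_country:
        score += 2           # unexpected region
    if s.device_is_rooted:
        score += 3
    if s.virtual_camera:
        score += 3
    if not s.registry_match:
        score += 2           # possible synthetic identity
    return score

def decide(s: OnboardingSession) -> str:
    score = risk_score(s)
    if score >= 5:
        return "block"       # block the session entirely
    if score >= 2:
        return "step_up"     # require additional verification
    return "allow"

s = OnboardingSession("DE", "DE", device_is_rooted=False,
                      virtual_camera=True, registry_match=True)
print(decide(s))  # step_up: a virtual camera alone warrants extra checks
```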
Higher onboarding and transaction accuracy
A fully automated IDV flow greatly improves the customer experience during onboarding. It reduces false positives, speeds up verification—sometimes down to just a few seconds—and builds trust in the process.
Advanced algorithms can detect low-quality document scans and guide users to take a proper selfie on the first try. User-friendly hints also simplify the experience, helping people of all tech skill levels and age groups complete the process smoothly.
Real-world example: ABA Bank, the largest commercial bank in Cambodia, boosted its mobile account opening conversion rate by 78% thanks to more efficient document authentication during onboarding.
A push for anti-deepfake regulations
The evolution of GenAI is changing the regulatory landscape, as countries update existing rules and introduce new ones to address emerging threats. This shift is pushing banks to adopt stronger, AI-aware IDV practices.
Here are some of the key regulatory developments targeting deepfakes and AI use:
EU AI Act: In force since August 2024, with full compliance expected by 2026-2027, this legislation regulates AI use in IDV. Many IDV-related AI systems are now classified as “high-risk,” requiring banks to conduct risk assessments, log AI operations, use quality training data, and ensure human oversight of AI-driven decisions.
Denmark’s deepfake law: Denmark has amended its copyright law to state that every person has the right to their own body, facial features, and voice. This effectively treats a person’s likeness as intellectual property. Under the law—expected to pass by late 2025—realistic AI-generated imitations shared without consent would be illegal.
China’s AI content labeling rules: Since September 2025, all AI-generated or AI-modified content—images, videos, audio, or text—must be clearly labeled. Websites and apps are required to flag unmarked content as “suspected synthetic,” ensuring transparency for users.
What are the strategic IDV priorities for bank executives?
Today’s banks must stay ahead of fast-evolving threats while keeping pace with new AI regulations affecting IDV. That’s not easy. AI can be both a powerful tool and a potential risk. Navigating this landscape calls for a balanced, long-term strategy.
Here are some best practices to help strike that balance:
Treat next-gen IDV as a strategic investment
With most banks now operating fully or partially online, the IDV process has moved into digital environments—where sophisticated fraud is a constant risk. In this context, modern, accurate IDV software should be viewed as a strategic asset, not just an IT upgrade.
According to the updated 2025 standards from the Financial Action Task Force (FATF), remote onboarding is not automatically considered high-risk when effective digital IDV controls are in place. It can even be considered low-risk.
To fight both traditional and emerging fraud, the ideal IDV solution should combine automation, selfie and ID liveness checks, real-time activity monitoring, and smart customization based on risk profiles, location, and user behavior.
It’s also critical to include both data source control and injection attack detection in the toolkit. The first ensures that the verification data isn’t manipulated between the user’s device and the company’s server; this can be reinforced by server-side reverification, especially when NFC-enabled documents are checked. The second targets injection attacks, in which GenAI-generated content is actively fed into the verification flow; modern IDV solutions, including Regula’s software, must be able to block them.
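To make the data source control point concrete, here is a minimal sketch of one of its simplest building blocks: authenticating the payload so tampering in transit is detectable. A shared key and Python’s standard hmac module are used for brevity; this illustrates the principle only, not how Regula or any other vendor implements it, and a real deployment would use per-device keys or asymmetric signatures alongside server-side reverification.

```python
# Minimal sketch of payload authentication as a data-source-control building
# block: the capture client signs the verification payload, and the server
# rejects anything whose signature fails to verify.

import hashlib
import hmac

SHARED_KEY = b"demo-key-do-not-use-in-production"  # illustration only

def sign_payload(payload: bytes, key: bytes = SHARED_KEY) -> str:
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_payload(payload: bytes, signature: str, key: bytes = SHARED_KEY) -> bool:
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)  # constant-time comparison

capture = b'{"document_image": "...", "selfie": "..."}'
sig = sign_payload(capture)
print(verify_payload(capture, sig))             # True: payload intact
print(verify_payload(capture + b"x", sig))      # False: reject the session
```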
Building IDV flows around specific business cases makes ongoing monitoring more effective. It also strengthens fraud prevention, even for existing customers whose accounts might be compromised or misused for illegal actions like money structuring or smurfing.
Build layered defenses with human oversight
No single tool can stop today’s AI-powered fraud. That’s why a multi-layered defense strategy is essential. Combining document authenticity checks, facial liveness detection, and suspicious activity monitoring helps banks tackle different types of attacks simultaneously.
Analyzing additional details in the IDV process is also critical. For example, in its Alert on Fraud Schemes Involving Deepfake Media, FinCEN advises banks to flag inconsistencies such as the following (a code sketch of the first check appears after the list):
An ID photo in which the customer appears significantly younger than the stated date of birth would suggest.
The use of third-party webcam plugins or rooted devices during live verification, which can enable injection attacks by displaying pre-recorded content.
Geographic or device data that doesn’t match the identity documents provided.
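The first of these flags can be approximated in code: compare the apparent age of the face in the ID photo with the age implied by the document’s date of birth. In this sketch the estimated age is assumed to come from a face-analysis model (not shown), and the 15-year tolerance is an illustrative assumption.

```python
# Minimal sketch of FinCEN's first red flag: the person in the ID photo
# appears significantly younger than the stated date of birth implies.

from datetime import date

def age_on(dob: date, today: date) -> int:
    """Full years between the date of birth and a reference date."""
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))

def photo_age_mismatch(estimated_age: float, dob: date,
                       today: date, tolerance: int = 15) -> bool:
    """Flag when the photo's apparent age is far below the documented age."""
    return age_on(dob, today) - estimated_age > tolerance

# `estimated_age` would come from a face-analysis model run on the portrait;
# it is hard-coded here for illustration.
print(photo_age_mismatch(estimated_age=25.0, dob=date(1960, 4, 2),
                         today=date(2025, 10, 1)))  # True: flag for review
```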
For this reason, human oversight—especially in regulated industries—remains essential. In high-risk or ambiguous cases, involving a human reviewer is still considered best practice. It also supports compliance with legal requirements, such as those in the EU AI Act, which emphasize transparency and accountability in AI-driven decisions.
Human input is also crucial for spotting errors that AI might miss. By combining advanced technology with expert review, banks can create a reliable model where AI handles volume and speed, while humans manage exceptions and fine-tune the system over time.
Ensure flexibility in IDV
As GenAI evolves, so do fraud tactics. That’s why banks need an agile, “always-learning” strategy for fraud and risk management. In this approach, IDV algorithms are continuously updated and retrained with fresh fraud data to quickly identify and patch vulnerabilities.
Hands-on testing is also essential. Both document and biometric verification systems benefit from real-world feedback and industry benchmarks. Regular testing helps fine-tune accuracy, especially as threats shift and new edge cases emerge.
Some vendors, like Regula, support this approach by offering test samples—including documents with a chip for NFC verification—so banks can validate and improve their systems in realistic scenarios.
Maintain regulatory readiness
Keeping up with current and emerging legal requirements is essential for both compliance and customer data protection. While GenAI-specific regulations are still patchy and, in most jurisdictions, not yet fully in force, more are coming fast.
The smarter approach is to stay ahead of the curve by aligning internal policies with evolving guidelines on responsible AI use in IDV. This includes practices like documenting how AI makes decisions and testing for bias to avoid discriminatory outcomes.
It’s also the right time to take a proactive role in shaping industry standards. For example, by contributing to best practices for AI in KYC and AML, banks not only show leadership in responsible AI adoption—they also strengthen customer trust.
Foster a fraud-aware culture
Technology alone isn’t enough to stop fraud. Regular employee training and customer education are just as critical—especially as AI-powered attacks become more advanced.
Fraudsters still rely on traditional methods like social engineering to trick bank staff or customers and gain access to sensitive data. On top of that, mass data breaches continue to provide attackers with the personal information needed to launch convincing schemes.
To stay ahead, banks should actively educate both employees and clients about GenAI-enabled fraud techniques. This helps build a culture where unusual activity is flagged, deepfake scams are recognized, and people—both inside and outside the organization—become stronger partners in fraud prevention.
Final thoughts
In the GenAI era, banks face a unique challenge: fighting AI-powered threats with AI-driven tools. It's a case of meeting like with like—using neural networks and smart automation to counter sophisticated fraud. At the same time, banks must shoulder greater compliance responsibilities, even as AI regulations remain a work in progress in most regions.
But with uncertainty comes opportunity. Forward-thinking banks can help shape the next generation of security standards while strengthening their own IDV strategies. Those who lead now may set the benchmark for the industry in the years ahead.