In a world where deepfakes spread faster than facts and our faces are everywhere online, does biometric authentication still keep us secure?
Our faces are everywhere – from social media profiles and biometric ID cards to pervasive CCTV networks across the city.
If biometric security depended solely on secrecy, then every uploaded selfie or AI-generated deepfake would put us at risk — yet biometric systems continue to secure billions of transactions worldwide every day.
Earlier this year, iProov found that when presented with deepfake video or images, only 0.1% of people could correctly identify them. In real-world scenarios, where awareness is lower, human vulnerability is likely even higher.
What does the future hold for biometric authentication and identity fraud? How do we mitigate the risks associated with deepfakes and stolen identities? We find out more from Dominic Forrest, Chief Technology Officer, iProov.
Identity fraud is on the rise in Asia Pacific. In an age of deepfakes and digital injection attacks, are biometric authentication methods obsolete?
Forrest: Not at all. In fact, biometric authentication, when implemented correctly, is more critical than ever. What’s becoming obsolete is static approaches such as “selfie-based” or single-frame liveness checks that can be easily spoofed. The next phase of identity assurance requires more resilient methods like Dynamic Liveness that can distinguish a live, present human from a replayed video, an injected stream, or a synthetic face.
This is why the conversation is moving toward science-based biometric technologies that not only resist today’s attack methods but also evolve through continuous monitoring of emerging threats.
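For illustration, the sketch below shows the general challenge-response principle that underpins dynamic liveness checks: the server issues a fresh, unpredictable challenge for every attempt, so pre-recorded or injected footage cannot contain the correct response. This is a simplified, hypothetical flow, not iProov's implementation; the function names and the MAC-based verification stand in for the real signal-processing step.

```python
import os
import time
import hmac
import hashlib
import secrets

# Hypothetical server-side sketch of a challenge-response liveness flow.
# Core idea: every attempt gets a fresh, unpredictable challenge (e.g., a
# randomized screen-illumination sequence). Replayed or injected video
# cannot match, because the challenge did not exist when it was recorded.

SERVER_KEY = os.urandom(32)        # per-deployment secret (illustrative)
CHALLENGE_TTL_SECONDS = 30         # challenges expire quickly

_active: dict[str, tuple[bytes, float]] = {}

def issue_challenge(session_id: str) -> bytes:
    """Create a one-time random challenge bound to this session."""
    challenge = secrets.token_bytes(16)
    _active[session_id] = (challenge, time.time())
    return challenge

def verify_response(session_id: str, observed: bytes) -> bool:
    """Accept only a fresh, single-use response to the exact challenge."""
    record = _active.pop(session_id, None)   # single use: replay fails
    if record is None:
        return False
    challenge, issued_at = record
    if time.time() - issued_at > CHALLENGE_TTL_SECONDS:
        return False                         # stale: replay window closed
    # Stand-in for the real step: in practice the response is extracted
    # from the live video (e.g., how the face reflects the illumination
    # sequence), not computed as a raw MAC.
    expected = hmac.new(SERVER_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, observed)
```

The single-use, short-lived challenge is what defeats replays and injected streams; the signal-level analysis it stands in for is what defeats synthetic faces.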
What are the risks associated with stolen images or deepfakes, and how can businesses stay ahead to ensure facial authentication remains resilient?
Forrest: Across Asia Pacific, digital services have become inseparable from daily life. From government and banking services in Singapore, to mobile-first markets like the Philippines and Vietnam, millions of users are relying on biometric authentication, including facial verification. This rapid adoption, however, has created fertile ground for fraud.
The threat is compounded by advances in deepfake technology—convincing fake faces, voices, and even live video feeds that are designed to spoof identity verification systems. Attackers can launch digital injection attacks, where synthetic images or video streams are inserted directly into authentication systems, bypassing the camera altogether. Because these attacks never involve a physical spoof in front of a lens, they can evade many of the liveness checks businesses rely on today.
The consequences of sophisticated AI-driven fraud are serious and multifaceted: fraudulent transactions, large-scale identity theft, and impersonation can drain customer accounts or compromise sensitive systems. In one recent high-profile case, scammers used deepfake technology to impersonate a company executive in a video call, tricking an employee into authorizing a transfer of US$25 million.
According to the Global Association of Forensic Accountants (GAFA), deepfake incidents increased tenfold (a 900% rise) between 2023 and 2025. The consequences go beyond financial loss: stolen identities, drained accounts, and unauthorized access risk eroding the trust that underpins digital ecosystems across the region.
Resilience, therefore, must evolve beyond static selfies and basic liveness checks. Today’s biometric systems need to detect not only physical spoofs, such as masks or printed photos, but also advanced digital injection attempts. At the same time, they must give users a simple, reliable way to prove they are the right person, that they are real, and that they are physically present at the moment of authentication.
Staying ahead means embracing approaches that continuously learn from global threat intelligence and adapt in real time. It requires ongoing monitoring of evolving attack patterns and the ability to strengthen defenses dynamically. Only then can organizations maintain trust in a digital environment where fraud is increasingly fast-moving, automated, and powered by AI.
How should organizations distinguish and balance how biometric data is stored (privacy protection) versus how it’s used to verify identity (security)?
Forrest: Security and privacy are two sides of the same coin, and both are non-negotiable.
Protecting privacy means limiting data collection to only what is essential, and ensuring that any biometric information cannot be reconstructed, reverse-engineered, or misused. Protecting security means guaranteeing that once biometric data is captured, it cannot be spoofed, replayed, or digitally injected into an authentication system by attackers.
Best practice today involves using mathematically irreversible, unique biometric templates instead of raw images, and encrypting data both in transit and at rest. Even if data were intercepted, it would be useless to attackers. Security keeps systems safe from external attacks, while privacy safeguards individuals from unnecessary exposure. Digital trust requires both working hand in hand.
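As a concrete illustration of that principle, here is a minimal sketch of template-based storage: only a derived embedding vector is kept, encrypted at rest, and the raw image is discarded after enrollment. It assumes templates are fixed-length float vectors produced by some face-embedding model (not shown) and uses the cryptography package's Fernet for symmetric encryption; real template-protection schemes go further, but the pattern is the same.

```python
# Minimal sketch: store only an encrypted, derived template, never the raw
# image. Assumes templates are fixed-length float vectors from a
# face-embedding model (not shown); uses the `cryptography` package's
# Fernet for symmetric encryption at rest. Illustrative only.
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in production: held in a KMS/HSM, not in code
fernet = Fernet(key)

def enroll(template: list[float]) -> bytes:
    """Encrypt the derived template for storage; discard the source image."""
    return fernet.encrypt(json.dumps(template).encode())

def matches(stored: bytes, probe: list[float], threshold: float = 0.9) -> bool:
    """Decrypt the stored template and compare by cosine similarity."""
    enrolled = json.loads(fernet.decrypt(stored))
    dot = sum(a * b for a, b in zip(enrolled, probe))
    norm = (sum(a * a for a in enrolled) ** 0.5) * (sum(b * b for b in probe) ** 0.5)
    return norm > 0 and dot / norm >= threshold
```

Because only the encrypted derived template is stored, an attacker who exfiltrates the database obtains neither a face image nor anything directly replayable.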
When a business experiences identity fraud or theft, what should be the immediate remediation?
Forrest: The immediate priority in responding to a breach is containment. Organizations must first isolate affected systems and accounts to prevent further damage. At the same time, a rapid root cause analysis should be conducted to determine the attack vector—whether it was compromised credentials, a sophisticated deepfake, or an application vulnerability like an injection attack.
Once the initial point of compromise is understood, the organization can re-establish trust by requiring users to re-authenticate using stronger verification methods, such as secure biometric authentication capable of confirming both identity and genuine human presence.
Equally important are notifying stakeholders, patching the weakness, and monitoring continuously, because fraudsters don’t stop after one attempt. Ultimately, the goal isn’t just recovery; it’s restoring trust while raising the bar so the same attack can’t succeed again.
Implementing science-based liveness detection helps by distinguishing real humans from synthetic representations, such as deepfakes. Biometric systems should not be static; they should combine continuous monitoring with anomaly detection to spot unusual behaviors that may indicate an attack.
Advanced technical measures, including dynamic liveness checks and active threat intelligence, are crucial for identifying synthetic media. The question “Is this person really who they say they are?” lies at the heart of digital identity, and it will only become more important as online interactions expand.
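To make the anomaly-detection point concrete, the sketch below trains scikit-learn's IsolationForest on baseline authentication telemetry and flags sessions that deviate sharply from it. The features (attempts per hour, failure ratio, distinct devices) and the numbers are invented for the example; production systems would draw on much richer signals.

```python
# Illustrative sketch of anomaly detection over authentication telemetry,
# using scikit-learn's IsolationForest. Feature columns (invented for the
# example): [attempts per hour, failure ratio, distinct devices].
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Baseline: mostly ordinary sessions clustered around normal behavior.
normal = rng.normal(loc=[3.0, 0.05, 1.0], scale=[1.0, 0.03, 0.3], size=(1000, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A burst of rapid, failing attempts across many devices looks nothing
# like the baseline, so it scores as an outlier (-1); a typical session
# scores as an inlier (1).
suspicious = np.array([[40.0, 0.8, 6.0]])
typical = np.array([[3.0, 0.04, 1.0]])
print(model.predict(suspicious))  # [-1] -> flag for review
print(model.predict(typical))     # [ 1] -> normal
```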
What are the biggest threats facing digital identities today, and how can businesses prepare themselves and their customers for what’s next?
Forrest: The biggest threats to digital identity today come from AI-driven fraud like deepfakes, digital injection attacks, and synthetic identities. For example, in synthetic identity fraud, criminals piece together fragments of real information, such as names, addresses, or ID numbers, to fabricate entirely new identities. These “partly real, partly fake” profiles are notoriously hard to detect, slipping past many of the verification checks used by platforms.
But what’s particularly dangerous is the speed at which these threats evolve. Generative AI is now capable of producing convincing fake faces or even entire personas in seconds, and injection attacks allow fraudsters to bypass the camera entirely, slipping synthetic media straight into the authentication process. These threats are evolving too quickly for legacy tools like passwords, SMS OTPs, or even static selfie checks to keep up.
In Asia Pacific, the challenge is magnified. Mobile-first markets and the acceleration of digital onboarding mean fraudsters can launch scalable, cross-border attacks against banks, government services, and e-commerce platforms with unprecedented ease.
The answer lies in building trust into identity verification. That means ensuring systems can distinguish a genuine, live human from a spoof, replay, or digital injection attempt, and that verification happens in real time. Those checks are critical in stopping the kinds of AI-enabled attacks we’re now seeing.
At the same time, security must go hand in hand with usability. Biometric systems must work not only for tech-savvy users but also for older generations, people with lower digital literacy, and those relying on basic smartphones. When identity verification feels both effortless and secure, trust builds naturally and adoption follows.