Features

Biometrics and the digital identity crisis today

By Victor Ng | Tuesday, October 28, 2025, 3:30 PM Asia/Singapore

In a world where deepfakes spread faster than facts and our faces are everywhere online, does biometric authentication still keep us secure?

Our faces are everywhere – from social media profiles and biometric ID cards to pervasive CCTV networks across the city.

If biometric security depended solely on secrecy, then every uploaded selfie or AI-generated deepfake would put us at risk — yet biometric systems continue to secure billions of transactions worldwide every day.

Earlier this year, iProov found that when presented with deepfake videos or images, only 0.1% of people could correctly identify them as fake. In real-world scenarios, where awareness is lower, human vulnerability is likely even higher.

What does the future hold for biometric authentication and identity fraud? How do we mitigate the risks associated with deepfakes and stolen identities? We find out more from Dominic Forrest, Chief Technology Officer, iProov.

Identity fraud is on the rise in Asia Pacific. In an age of deepfakes and digital injection attacks, are biometric authentication methods obsolete?

Forrest: Not at all. In fact, biometric authentication, when implemented correctly, is more critical than ever. What’s becoming obsolete is static approaches such as “selfie-based” or single-frame liveness checks that can be easily spoofed. The next phase of identity assurance requires more resilient methods like Dynamic Liveness that can distinguish a live, present human from a replayed video, an injected stream, or a synthetic face.

This is why the conversation is moving toward science-based biometric technologies that not only resist today’s attack methods but also evolve through continuous monitoring of emerging threats.

What are the risks associated with stolen images or deepfakes, and how can businesses stay ahead to ensure facial authentication remains resilient?

Forrest: Across Asia Pacific, digital services have become inseparable from daily life. From government and banking services in Singapore, to mobile-first markets like the Philippines and Vietnam, millions of users are relying on biometric authentication, including facial verification. This rapid adoption, however, has created fertile ground for fraud.

The threat is compounded by advances in deepfake technology—convincing fake faces, voices, and even live video feeds that are designed to spoof identity verification systems. Attackers can launch digital injection attacks, where synthetic images or video streams are inserted directly into authentication systems, bypassing the camera altogether. Because these attacks never involve a physical spoof in front of a lens, they can evade many of the liveness checks businesses rely on today.
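
A common mitigation pattern for replay and injection attempts is to bind every capture to a short-lived, single-use challenge issued by the server, so that a stream recorded or synthesized in advance fails a basic freshness check. The Python sketch below is a minimal illustration of that idea only: the function names and the 30-second window are assumptions made for this example, not any vendor’s API, and real-world liveness systems bind the challenge into the capture itself (for instance, through on-screen illumination patterns) rather than relying on metadata checks alone.

```python
# Minimal, illustrative freshness check against replayed or injected capture
# streams. All names here (issue_challenge, verify_capture) are hypothetical.
import secrets
import time

CHALLENGE_TTL_SECONDS = 30          # assumed validity window for a capture
_active_challenges = {}             # nonce -> time it was issued

def issue_challenge() -> str:
    """Issue a single-use nonce the client must bind into its capture session."""
    nonce = secrets.token_hex(16)
    _active_challenges[nonce] = time.time()
    return nonce

def verify_capture(nonce: str) -> bool:
    """Accept a capture only if its nonce is known, unused, and still fresh."""
    issued_at = _active_challenges.pop(nonce, None)   # pop enforces single use
    if issued_at is None:
        return False                                  # unknown or reused nonce: treat as replay
    return (time.time() - issued_at) <= CHALLENGE_TTL_SECONDS

if __name__ == "__main__":
    token = issue_challenge()
    print(verify_capture(token))   # True: first use, within the window
    print(verify_capture(token))   # False: the same nonce cannot be replayed
```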

The consequences of sophisticated AI-driven fraud are serious and multifaceted: fraudulent transactions, large-scale identity theft, and impersonation can drain customer accounts or compromise sensitive systems. In one recent high-profile case, scammers used deepfake technology to impersonate a company executive in a video call, tricking an employee into authorizing a transfer of US$25 million.

According to the Global Association of Forensic Accountants (GAFA), deepfake incidents have increased tenfold from 2023 to 2025, a 900% rise over two years. The consequences go beyond financial loss. Stolen identities, drained accounts, and unauthorized access risk eroding the trust that underpins digital ecosystems across the region.

Resilience, therefore, must evolve beyond static selfies and basic liveness checks. Today’s biometric systems need to detect not only physical spoofs, such as masks or printed photos, but also advanced digital injection attempts. At the same time, they must give users a simple, reliable way to prove they are the right person, that they are real, and that they are physically present at the moment of authentication.

Staying ahead means embracing approaches that continuously learn from global threat intelligence and adapt in real time. It requires ongoing monitoring of evolving attack patterns and the ability to strengthen defenses dynamically. Only then can organizations maintain trust in a digital environment where fraud is increasingly fast-moving, automated, and powered by AI.

How should organizations distinguish between, and balance, how biometric data is stored (privacy protection) and how it is used to verify identity (security)?

Forrest: Security and privacy are two sides of the same coin, and both are non-negotiable.

Protecting privacy means limiting data collection to only what is essential, and ensuring that any biometric information cannot be reconstructed, reverse-engineered, or misused. Protecting security means guaranteeing that once biometric data is captured, it cannot be spoofed, replayed, or digitally injected into an authentication system by attackers.

Best practice today involves using mathematically irreversible, unique biometric templates instead of raw images, and encrypting data both in transit and at rest. This means that even if data were intercepted, it would be useless to attackers. Security keeps systems safe from external attacks, while privacy safeguards individuals from unnecessary exposure. Digital trust requires both working hand in hand.
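
As a rough illustration of the template principle, the toy Python sketch below stores only a salted, one-way digest of a quantized feature vector, so the stored record cannot be turned back into a face image. This is a deliberately simplified assumption, not a production scheme: real deployments use dedicated template-protection methods (such as cancelable biometrics or fuzzy commitment schemes) because two captures of the same face never match bit for bit, which a plain hash cannot tolerate.

```python
# Toy sketch of storing an irreversible biometric template instead of a raw
# image. Quantization, salting, and hashing are illustrative simplifications.
import hashlib
import os

def quantize(features: list[float], step: float = 0.1) -> bytes:
    """Coarsely bucket an embedding so small capture noise can fall in the same bin."""
    return bytes(int(round(f / step)) & 0xFF for f in features)

def make_template(features: list[float], salt: bytes = b"") -> tuple[bytes, bytes]:
    """Derive a salted, one-way template; the original embedding is not recoverable."""
    salt = salt or os.urandom(16)
    return salt, hashlib.sha256(salt + quantize(features)).digest()

def matches(features: list[float], salt: bytes, stored: bytes) -> bool:
    """Recompute the digest for a fresh capture and compare against the stored template."""
    return hashlib.sha256(salt + quantize(features)).digest() == stored

if __name__ == "__main__":
    enrolled = [0.42, -0.13, 0.88, 0.27]                       # hypothetical embedding
    salt, template = make_template(enrolled)
    print(matches([0.41, -0.12, 0.87, 0.26], salt, template))  # True: noise stays within the buckets
    print(matches([0.90, 0.30, -0.20, 0.10], salt, template))  # False: a different capture entirely
```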

When a business experiences identity fraud/theft, what should be the immediate remediation?

Forrest: The immediate priority in responding to a breach is containment. Organizations must first isolate affected systems and accounts to prevent further damage. At the same time, a rapid root cause analysis should be conducted to determine the attack vector—whether it was compromised credentials, a sophisticated deepfake, or an application vulnerability like an injection attack.

Once the initial point of compromise is understood, the organization can re-establish trust by requiring users to re-authenticate using stronger verification methods, such as secure biometric authentication capable of confirming both identity and genuine human presence.
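
In practice, that containment-plus-re-verification step often boils down to revoking live sessions for affected accounts and flagging them for step-up authentication (for example, biometric re-verification with liveness) on their next sign-in. The sketch below is a hypothetical, in-memory illustration; the data structures and the contain() function are invented for this example and do not reflect any specific platform.

```python
# Hypothetical illustration of containment followed by forced step-up
# re-authentication. Account IDs, sessions, and names are invented.
affected_accounts = {"user-1024", "user-2048"}
active_sessions = {"sess-a": "user-1024", "sess-b": "user-7777", "sess-c": "user-2048"}
step_up_required: set[str] = set()

def contain(accounts: set[str]) -> None:
    """Kill live sessions for affected accounts and flag them for stronger re-verification."""
    for session_id, user in list(active_sessions.items()):
        if user in accounts:
            del active_sessions[session_id]   # immediate containment
    step_up_required.update(accounts)         # enforce step-up on next login

if __name__ == "__main__":
    contain(affected_accounts)
    print(active_sessions)      # only the unaffected session remains
    print(step_up_required)     # both affected accounts must re-verify
```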

Equally important are notifying stakeholders, patching the weakness, and monitoring continuously, because fraudsters don’t stop after one attempt. Ultimately, the goal isn’t just recovery. It’s restoring trust while raising the bar so the same attack can’t succeed again.

Implementing science-based liveness detection helps by distinguishing real humans from synthetic representations, such as deepfakes. Biometric systems should not be static; they should combine continuous monitoring with anomaly detection to spot unusual behaviors that may indicate an attack.
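
The sketch below shows, in minimal form, what such anomaly detection can look like, assuming simple per-feature z-scores computed against a recent baseline of authentication telemetry. The feature names and the threshold are illustrative assumptions; production systems draw on far richer signals and models.

```python
# Illustrative anomaly flagging over authentication telemetry using z-scores.
# Feature names and the 3-sigma threshold are assumptions for this sketch.
from statistics import mean, stdev

def is_anomalous(history: list[dict], attempt: dict, threshold: float = 3.0) -> bool:
    """Flag an attempt if any numeric feature sits more than `threshold`
    standard deviations away from the recent baseline."""
    for feature, value in attempt.items():
        baseline = [h[feature] for h in history if feature in h]
        if len(baseline) < 2:
            continue                      # not enough history to judge this feature
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            continue                      # no variation to compare against
        if abs(value - mu) / sigma > threshold:
            return True
    return False

if __name__ == "__main__":
    history = [
        {"attempts_per_hour": 2, "capture_duration_ms": 1800},
        {"attempts_per_hour": 3, "capture_duration_ms": 2100},
        {"attempts_per_hour": 1, "capture_duration_ms": 1950},
    ]
    print(is_anomalous(history, {"attempts_per_hour": 40, "capture_duration_ms": 50}))   # True
    print(is_anomalous(history, {"attempts_per_hour": 2, "capture_duration_ms": 2000}))  # False
```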

Advanced technical measures, including dynamic liveness checks and active threat intelligence, are crucial for identifying synthetic media. The question “Is this person really who they say they are?” lies at the heart of digital identity, and it will become more important as online interactions expand.

What are the biggest threats facing digital identities today, and how can businesses prepare themselves and their customers for what’s next?

Forrest: The biggest threats to digital identity today come from AI-driven fraud like deepfakes, digital injection attacks, and synthetic identities. For example, in synthetic identity fraud, criminals piece together fragments of real information, such as names, addresses, or ID numbers, to fabricate entirely new identities. These “partly real, partly fake” profiles are notoriously hard to detect, slipping past many of the verification checks used by platforms.

But what’s particularly dangerous is the speed at which these threats evolve. Generative AI is now capable of producing convincing fake faces or even entire personas in seconds, and injection attacks allow fraudsters to bypass the camera entirely, slipping synthetic media straight into the authentication process. These threats are evolving too quickly for legacy tools like passwords, SMS OTPs, or even static selfie checks to keep up.

In Asia Pacific, the challenge is magnified. Mobile-first markets and the acceleration of digital onboarding mean fraudsters can launch scalable, cross-border attacks against banks, government services, and e-commerce platforms with unprecedented ease.

The answer lies in building trust into identity verification. That means ensuring systems can distinguish a genuine, live human from a spoof, replay, or digital injection attempt, and that verification happens in real time. Those checks are critical in stopping the kinds of AI-enabled attacks we’re now seeing.

At the same time, security must go hand in hand with usability. Biometric systems must work not only for tech-savvy users but also for older generations, people with lower digital literacy, and those relying on basic smartphones. When identity verification feels both effortless and secure, trust builds naturally and adoption follows.
