In Asia Pacific, deepfakes have moved from theoretical risk to a direct challenge against the integrity of digital media and communications.
Deepfake attacks in APAC have surged over the last couple of years, with fraudsters leveraging AI-generated voices and videos to bypass security controls and steal millions from businesses.
While early concerns about deepfake technology centred on political misinformation and disinformation, the rise of generative AI has shifted attention to the threat deepfakes pose to businesses and individuals.
Scams targeting businesses and individuals have grown more sophisticated, with deepfakes increasingly used to make fraudulent messages and videos appear more believable. This has led to significant financial losses and raised urgent questions about trust and safety in our increasingly AI-powered world.
Voice and video impersonation scams are rising, revealing that the problem extends far beyond technology; it’s a profound crisis of trust.
This crisis of trust carries dire consequences, from financial fraud to misinformation. How do we address it, and is technology the answer? CybersecAsia speaks with Tony Anscombe, Chief Security Evangelist at ESET, about why the deepfake crisis is fundamentally a trust issue, how deepfakes exploit human vulnerabilities and erode confidence in digital interactions, and what organizations can do about it.
How far has deepfake technology advanced, and how serious a cybersecurity problem is it today?
Anscombe: Deepfake technology has evolved at speed and has the potential to become one of the most pressing cybersecurity threats across APAC. In Singapore alone, the 2024 Cybersecurity Public Awareness Survey revealed that while nearly 80% of people were confident in spotting deepfakes, only 1 in 4 could actually tell them apart from real videos, highlighting just how convincing these fakes have become.
Scammers are now exploiting deepfakes as a tool in their malicious campaigns. There are instances where they have been used to bypass some forms of biometric checks, impersonate executives in video calls, and trick employees into wiring hundreds of thousands of dollars into fraudulent accounts.
A prominent case reported in Singapore early this year nearly cost a multinational company over half a million dollars. The easy accessibility of deepfake technology allows cybercriminals to take threats such as business email compromise to a new level of social engineering.
Social media is also becoming ground zero for deepfake-driven scams. According to ESET’s Global H2 2024 Threat Report, there’s been a 335% surge in deepfake scams on social media platforms.
One example: bad actors disguising fraudulent investment opportunities by using fake videos and brand logos to look legitimate.
Some industries face outsized risk from deepfake fraud. The insurance industry, for example, often relies on customer-submitted images and videos when processing claims; scammers can submit doctored pictures showing vehicle damage that does not exist, even extending to modified dashcam or home doorbell footage. And when the insurer attempts to verify the claimant via a video call, the scammer may go as far as using real-time deepfake technology to deceive the claims representative.
Deepfakes are no longer a future threat – they’re already here – undermining trust, potentially compromising systems, and putting real money and reputations at risk.
In what ways are deepfakes eroding the foundation of digital trust in Asia Pacific societies?
Anscombe: In a region where daily life is increasingly digital, whether it’s banking, shopping, learning, or engaging with government services, deepfakes are making it harder to tell what’s real and who to trust.
Bad actors exploit our most trusted senses, sight and sound, by mimicking voices, faces, and identities with alarming precision. From financial scams and identity theft to emotionally manipulative schemes, deepfakes are fueling a new wave of fraud that chips away at public confidence in digital platforms.
The impact goes beyond money. Deepfakes are being weaponized to spread political misinformation and discredit public figures, as seen in Singapore, where a fake video of former President Halimah Yacob prompted a police report. This kind of deception undermines trust in institutions, leaders, and democratic processes.
With Asia Pacific being one of the most mobile-first and social media-driven regions in the world, deepfakes can spread like wildfire, sowing doubt, confusion, and distrust at scale.
As generative AI tools like Sora push the boundaries of what’s possible, the challenge now is not just detecting fakes but rebuilding trust in an era where even video evidence can’t be taken at face value.
Should organizations and governments set rules and regulations to protect their people and businesses against deepfake attacks?
Anscombe: Governments and organizations must set clear rules to protect people and businesses from deepfake attacks. Regulation is key to ensuring responsible AI use, but policies alone aren’t enough.
In today’s threat landscape, waiting to react is no longer an option. We need to stay one step ahead.
How can organizations in APAC stay ahead of deepfake scammers, who don’t have to abide by regulatory requirements in the use of AI and deepfake technologies?
Anscombe: For organizations in APAC, that means adopting a proactive, multi-layered defense built on people, processes, and technology.
- Start with the basics: Frequent fraud risk assessments and AI-aware anti-fraud policies that guide how to handle deepfake scenarios – from internal protocols to customer touchpoints.
- People are your first line of defense: Ongoing employee training, customer education, and simulation-based learning, like mock deepfake attacks, can help build instincts without creating fear. Even HR practices like live interviews and deeper background checks can help detect potential insider threats.
- Use AI to fight AI: Use deepfake detection tools in critical processes like KYC (Know Your Customer) and biometric verification. Leverage generative AI to safely create synthetic training data that helps your defenses get smarter over time.
- Stay informed of the latest techniques and scams in circulation: Understanding how bad actors adapt their methods of deception will keep your business prepared.
- Report every instance of deepfake fraud, whether trivial or serious: Governments allocate law enforcement and legislative resources based on the prevalence of a threat, and reported incidents provide the data needed to make those decisions.
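To make the "use AI to fight AI" idea concrete, here is a deliberately minimal sketch of the synthetic-training-data approach Anscombe describes: generate labelled synthetic samples, then train a simple detector on them. Everything here is illustrative; the three-dimensional "artifact feature" vectors and their distributions are hypothetical stand-ins for real deepfake features (such as face-blending boundary scores), and a production system would use a far richer model and real extracted features.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 3-dim artifact-feature vectors: genuine media is assumed
# to cluster around low artifact scores, deepfake media around high ones.
real = rng.normal(loc=0.2, scale=0.15, size=(300, 3))
fake = rng.normal(loc=0.8, scale=0.15, size=(300, 3))
X = np.vstack([real, fake])
y = np.concatenate([np.zeros(300), np.ones(300)])  # 1 = deepfake

# Shuffle, then hold out 20% for evaluation.
idx = rng.permutation(len(X))
X, y = X[idx], y[idx]
X_train, X_test = X[:480], X[480:]
y_train, y_test = y[:480], y[480:]

# Minimal logistic-regression detector trained by gradient descent.
w, b = np.zeros(3), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X_train @ w + b)))  # predicted P(fake)
    w -= 0.5 * (X_train.T @ (p - y_train)) / len(y_train)
    b -= 0.5 * np.mean(p - y_train)

preds = (1.0 / (1.0 + np.exp(-(X_test @ w + b)))) > 0.5
accuracy = np.mean(preds == y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

The point of the sketch is the workflow, not the model: safely generated synthetic examples let defenders rehearse and retrain detection continuously, so defences "get smarter over time" as attack techniques shift.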
Bad actors don’t wait for permission to innovate, so neither should we. The same AI tools used to deceive can also be adapted and used to defend. The organizations that embrace this mindset will be the ones that protect their customers, preserve brand trust, and stay resilient in a rapidly evolving digital world.