AI-generated disinformation and fraud are a growing concern, given the easy accessibility and availability of generative-AI tools, especially with national elections coming up across APAC…
In Sumsub’s Fraud Exposure Survey 2024, 85% of APAC respondents said they are concerned about deepfakes and fear their future impact on elections.
According to the study, identity fraud saw a 121% year-on-year increase across APAC in 2024, with notable spikes in Singapore (207%), Indonesia (201%), and Thailand (206%). Amid such scams, deepfake fraud stands out, seeing a 194% year-on-year spike in APAC.
Globally, deepfakes have increased by 245%, now making up 7% of all fraud attempts in 2024. Seven of the top 10 jurisdictions for such fraud networks are concentrated in the APAC region, namely Thailand, China, Bangladesh, Vietnam, Cambodia, Hong Kong, and Singapore.
With elections coming up in several nations in the region, CybersecAsia sought out some insights from James Lee, Legal Director, APAC, Sumsub.
How is AI-generated misinformation affecting the digital landscape for an election-heavy year?
Lee: AI-generated misinformation is often used to spread fake news during elections, manipulating voters’ confidence and distorting their perceptions of candidates and issues. This can lead to confusion about policies, create unfounded fears, and promote division among the electorate.
With the increasing sophistication of AI tools, the creation of convincing fake news, deepfakes, and misleading narratives has become easier than ever.
According to our Q1 verification and identity fraud data, there has been a staggering 245% year-on-year increase in deepfakes worldwide, particularly concerning as numerous countries are set to hold elections in 2024 and 2025.
In the Asia-Pacific region, countries like South Korea (1625%), Indonesia (1550%), and India (280%) have seen increases in deepfake incidents that far surpass the global average, revealing a concerning trend that threatens to distort political discourse and voter perceptions.
Other key economies in the Asia Pacific region, such as China (2800%) and Singapore (1100%), are experiencing significant surges in deepfake cases, indicating that the threat of misinformation is pervasive and escalating.
What factors do you think contributed to the surge in deepfake incidents in APAC?
Lee: Sophistication and accessibility of GenAI tools: The prevalence of GenAI tools has blurred the lines between real and fabricated content, especially with hyper-realistic audiovisual elements. This has led to more deepfakes, which are increasingly difficult to detect.
With sophisticated deepfake tactics, fraudsters and regulators are often playing a game of cat-and-mouse. Although the regulatory landscape against deepfakes is evolving, APAC is still in the early stages of its GenAI regulatory journey: while several countries are discussing future bills, many have yet to establish specific laws addressing deepfakes.
Prevalence of social media: Often the breeding ground for AI-generated misinformation, the online media sector, encompassing news websites, streaming services, social platforms, and digital advertising, saw the biggest rise in identity fraud rates between 2021 and 2023, at 274%.
Lack of regulations: Although the regulatory landscape in APAC is evolving, the current lack of comprehensive laws may have contributed to increased vulnerability to deepfakes in the region. For example, Australia has no dedicated AI laws but relies on voluntary ethical frameworks such as Australia’s AI Ethics Principles and Voluntary AI Safety Standard. While recent proposals have indicated a shift towards regulating high-risk AI applications, gaps remain in enforcement mechanisms.
However, the situation is expected to improve as more APAC countries increase cross-border collaboration. In particular, ASEAN published the ASEAN Guide on AI Governance and Ethics in early February 2024, which applies to ASEAN member countries and aims to facilitate the alignment and interoperability of AI frameworks across ASEAN jurisdictions.
How might the threat of deepfakes evolve in future elections, and what measures can governments and organizations take to protect against deepfakes?
Lee: The threat of deepfakes in future elections could evolve significantly as technology advances, making this issue increasingly complex and concerning. As deepfake creation tools become more sophisticated and accessible, the potential for disinformation and manipulation during elections will intensify.
Moreover, the accessibility of these advanced tools means that anyone can create and disseminate misleading content. This democratization of deepfake technology could lead to a dramatic increase in the volume of deepfake material available, further complicating the information landscape during election cycles.
To protect voters against deepfakes, stringent regulations are necessary, and the governments of APAC countries could consider enacting laws with stringent, proactive measures.
While digital watermarks have been the most widely recommended measure against deepfakes, several concerns have been raised about their effectiveness, including technical implementation, accuracy, and robustness.
A multi-pronged approach is necessary to combat deepfakes. This should include technical solutions such as watermarking or digital signatures for authentic content, as well as robust educational campaigns aimed at helping voters recognize and critically assess potentially manipulated media. For example, through training initiatives and mini-games, voters can be equipped with the skills to identify AI-generated misinformation.
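To make the digital-signature idea concrete, here is a minimal, hypothetical Python sketch. It uses an HMAC with a shared secret as a stand-in for a real publisher signature (in practice an asymmetric scheme such as Ed25519 would be used, so anyone can verify without holding the secret); the key and media bytes are illustrative only.

```python
import hashlib
import hmac

# Hypothetical publisher key. A real deployment would use an asymmetric
# key pair so verification does not require sharing the secret.
PUBLISHER_KEY = b"example-newsroom-signing-key"

def sign_content(media_bytes: bytes) -> str:
    """Produce an authenticity tag for a piece of media at publication time."""
    return hmac.new(PUBLISHER_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_content(media_bytes: bytes, tag: str) -> bool:
    """Check that media has not been altered since it was signed."""
    expected = sign_content(media_bytes)
    return hmac.compare_digest(expected, tag)

original = b"<video bytes of the official campaign clip>"
tag = sign_content(original)

print(verify_content(original, tag))          # True: untouched media
print(verify_content(original + b"x", tag))   # False: tampered media
```

Any downstream edit to the bytes, including a deepfake substitution, invalidates the tag, which is the property watermarking and signing schemes rely on.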
Moreover, policymakers and organizations can leverage AI for deepfake detection by implementing multi-layered protection throughout all stages of the user journey. Firms can combine various techniques and tools, such as Identity Verification to catch fraudsters during onboarding, alongside Behavioural Intelligence, Deepfake Detection, and AI-based Event Monitoring and Fraud Prevention solutions that detect and act against fraud rings, account takeovers, and more.
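As a loose illustration of this layering, the sketch below combines hypothetical onboarding, liveness, and behavioural signals into a single decision. None of the names, scores, or thresholds come from Sumsub's products; they are assumptions made for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Session:
    id_verified: bool          # onboarding: identity verification passed
    liveness_score: float      # deepfake detection: 0.0 (fake) .. 1.0 (live)
    behaviour_score: float     # behavioural intelligence: 0.0 risky .. 1.0 normal
    flags: list = field(default_factory=list)

def assess(session: Session) -> str:
    # Layer 1: identity verification at onboarding
    if not session.id_verified:
        session.flags.append("identity-unverified")
    # Layer 2: deepfake / liveness detection on submitted media
    if session.liveness_score < 0.5:
        session.flags.append("possible-deepfake")
    # Layer 3: behavioural intelligence during the session
    if session.behaviour_score < 0.3:
        session.flags.append("anomalous-behaviour")
    # Event monitoring: two or more flags together trigger a block
    if len(session.flags) >= 2:
        return "block"
    return "review" if session.flags else "allow"

print(assess(Session(True, 0.9, 0.8)))    # allow
print(assess(Session(False, 0.2, 0.8)))   # block
```

The point of layering is that no single signal decides the outcome: a weak liveness score alone sends the session to review, while several correlated signals escalate to a block.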
How can social media platforms be more proactive in addressing the spread of deepfakes?
Lee: As noted earlier, online media, often the breeding ground for AI-generated misinformation, saw the biggest rise in identity fraud rates, 274%, between 2021 and 2023.
Hence, social media platforms must work closely with policymakers to combat deepfake threats. This includes frameworks that prioritize safeguards and technologies to detect distressing content, fake accounts, and AI-generated deepfakes, limiting the spread of misinformation.
It is imperative to establish and enforce comprehensive regulatory frameworks that penalize individuals and companies who disregard their legal and social responsibilities when disseminating digital content. AI age-estimation technology can also help safeguard young users from harmful misinformation more effectively.
Unfortunately, social media and news platforms lack adequate technology to differentiate between downloaded, generated, and original content, posing a significant risk of enabling the spread of disinformation, as well as risks to both personal reputation and financial security.
Users should prioritize verifying the source of information and distinguishing between trusted, reliable media and content from unknown users. Even content shared by friends or reputable media outlets can be deepfakes, as demonstrated by last year’s notable spoofs, such as the Pope wearing Balenciaga, which duped the media.
What other cybersecurity trends do you expect in the coming year-end and in 2025?
Lee: Soon we will see increased regulatory scrutiny. More governments will impose stricter compliance requirements on organizations for handling and disseminating digital information.
Several APAC countries are already evolving their regulatory frameworks to impose stricter compliance requirements against deepfakes. In South Korea, for example, a revision to the Public Official Election Act was passed by a special parliamentary committee, proposing a 90-day ban on deepfake political campaign videos before elections. In Singapore, a law was proposed to ban deepfakes and other digitally manipulated content of candidates during elections.
With growing regulatory requirements and the complexity of managing data across multiple countries, centralized data management solutions and centralized cybersecurity platforms will become more prevalent. These simplify the monitoring of AI-generated deepfakes and other forms of misinformation, facilitating quicker identification of, and response to, threats.
We expect firms to prioritize Local Data Processing (LDP) infrastructure that allows for the storage and processing of their end-users’ personal data and transactions in certain regions to comply with regulatory requirements.
The data privacy landscape in APAC is rapidly evolving as economies grow and digital transformation accelerates. Recent years have seen a surge in stringent data protection regulations, driven by heightened awareness of privacy issues and the need for greater control over personal data.