Can the island-state serve as a global benchmark for election security as it gears up for national elections in May 2025?
AI has supercharged the internet and social media. But as the flow of information grows, so does the flood of disinformation and deepfakes. This is a grim prospect for governments and citizens, especially in an election year.
For instance, India’s general elections in 2024 offered a glimpse of how politicians could innovate with AI-generated videos to extend their reach, while raising questions about manipulative disinformation via deepfakes.
In a fraud exposure survey in 2024, 85% of APAC respondents were concerned about deepfakes, expressing fear about their future impact on elections. Nations like South Korea (1625%), Indonesia (1550%), and India (280%) have seen increases in deepfake incidents that far surpass the global average, revealing a concerning trend that threatens to distort political discourse and voter perceptions.
In Singapore, some progress has been made: a law passed in October 2024 bans deepfakes of candidates during the election period. But is this enough to address the challenge of disinformation at such a critical time for Singaporeans?
We find out more from a quick Q&A with Chester Wisniewski, Director, Field CTO, Sophos.
With the rise of genAI in the Asia Pacific region, are social media platforms doing enough to detect and combat deepfakes? How can we enforce deeper accountability from these platforms?
Chester Wisniewski (CW): No, social media platforms are not doing enough, and there is likely little we can do about this problem.
Ideally, we want the large commercial AI providers to curb abuse, but with models freely available, it is impossible to prevent misuse: people wishing to bypass the limitations will simply run the models themselves, without the guardrails. The “cat is out of the bag,” as they say, and there is no going back now.
How serious is the threat of voice/audio cloning? Is this the next disinformation tool we should be wary of? Are there other tools or techniques we should be mindful of?
CW: Voice cloning technology is now widely available, with a passable rate of authenticity. We need to be very wary and bolster our authentication processes to take this into account. Creating a voice clone is no longer costly or time-intensive, making the technology widely accessible with a low barrier to entry. This is not yet true of video clones, which are much more expensive and complex to create, but we should be prepared for the time when video is as simple to fake as audio has already become.
We have not quite reached the ability to voice clone in real-time, but I expect we will have this ability before too long. The current abuses have all been financially motivated, so the best approach is to always verify the identity of anyone asking for a wire transfer, crypto investment, or other money transmission through a different communication channel, using contact methods you have previously established.
Can deepfakes really affect the outcomes of polls in general, and for Singaporeans in particular as they head to a general election this year?
CW: People should only rely on information from verified sources rather than social media. It is tempting to trust content shared by your friends and loved ones on social platforms, but these tools are being weaponized at scale and can fool even those you trust.
If you see a video or audio clip that makes you angry or confused about its message, search news outlets for coverage. Most media organizations will point out fakes that have gained popularity and, if something is truly controversial, will confirm its authenticity with experts or eyewitness accounts.
How can the law and regulations keep up with disinformation, especially during election periods?
CW: I cannot comment on legal strategies, but the law needs to be crafted with the understanding that this technology exists, is accessible, and is unlikely to be eliminated.