The move comes in advance of the city-state’s next general election, due by late November next year
On 15 Oct, the Singapore Parliament passed the Elections (Integrity of Online Advertising) (Amendment) Bill to ban deepfakes and other digitally manipulated content that misrepresents candidates in any election during the campaigning period.
The ban takes effect when a writ of election is issued in the country and lasts until the end of polling. It applies to content that meets four conditions: the content is digitally generated or manipulated; it comprises advertising intended “to promote, procure or prejudice the electoral prospects of a party or candidate”; it depicts a candidate saying or doing something that he or she did not say or do; and it is “realistic enough that some members of the public who see or hear the content would reasonably believe that the candidate did in fact say or do that thing”.
The new law does not cover AI-generated animated characters and cartoons, content that is clearly labeled as entertainment or satire, or footage that has merely been enhanced with digital cosmetic filters and lighting adjustments.
Positive public reactions
Media observers have lauded the timely move, noting that the public could be misled if election candidates, or anyone else online, were allowed unfettered use of AI to create fake news and misinformation during the election period.
From the cybersecurity perspective, experts have emphasized the growing threat that AI-generated deepfakes pose to election integrity. Check Point Software’s spokesperson highlighted how sophisticated AI tools can be used to create hyper-realistic deepfakes that may mislead voters and undermine trust in the democratic process.
Additionally, experts from Jumio pointed out in July 2024 that an international survey had shown deepfakes could erode public trust, with 83% of Singaporean respondents expressing concern that such content could influence election proceedings.
Said Keeper Security’s CEO and co-founder, Darren Guccione: “To identify potential deepfakes, voters should look for subtle inconsistencies in the content. Signs of manipulation may include mismatched facial expressions, unnatural speech patterns or irregularities in the synchronization of audio and video. Additionally, verifying the source of the content and cross-referencing with reliable information can help verify its authenticity.”
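The audio-video synchronization cue Guccione mentions can, in principle, be screened for programmatically. The sketch below is a minimal illustration, not a tool endorsed by any of the experts quoted: it compares the speech energy envelope of a separately extracted audio track against pixel motion in the lower half of each video frame, where the mouth usually sits, and flags clips where the two are weakly correlated. The file names, crop region and threshold are all assumptions for illustration; real deepfake detection relies on far more sophisticated models.

```python
# Illustrative sketch only: flags clips whose speech energy and mouth-region
# motion are poorly correlated, one of the lip-sync cues mentioned above.
# Assumes the audio track has already been extracted to a WAV file
# (e.g. with ffmpeg); paths and the 0.3 threshold are arbitrary examples.
import cv2
import librosa
import numpy as np

VIDEO_PATH = "clip.mp4"   # hypothetical input video
AUDIO_PATH = "clip.wav"   # hypothetical extracted audio track

# Per-frame motion in the lower-centre of the frame (a crude mouth proxy).
cap = cv2.VideoCapture(VIDEO_PATH)
motion, prev = [], None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    roi = cv2.cvtColor(frame[h // 2:, w // 4: 3 * w // 4], cv2.COLOR_BGR2GRAY)
    if prev is not None:
        motion.append(np.mean(cv2.absdiff(roi, prev)))
    prev = roi
cap.release()
if len(motion) < 2:
    raise SystemExit("Could not read enough video frames.")

# Speech energy envelope, resampled to one value per video frame.
y, sr = librosa.load(AUDIO_PATH, sr=16000, mono=True)
rms = librosa.feature.rms(y=y, frame_length=2048, hop_length=512)[0]
rms_per_frame = np.interp(
    np.linspace(0, len(rms) - 1, num=len(motion)),
    np.arange(len(rms)),
    rms,
)

# A very low correlation is only a hint, never proof, of manipulation.
corr = np.corrcoef(motion, rms_per_frame)[0, 1]
print(f"motion/audio correlation: {corr:.2f}")
if corr < 0.3:
    print("Weak lip-sync correlation; the clip may warrant closer review.")
```

In practice, specialised detectors track facial landmarks and learned audio-visual embeddings rather than this crude pixel-motion proxy, but the principle of cross-checking the audio against the visuals is the same.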
Other views online
From a technical standpoint, commenters have noted that there may be hurdles to how the country’s authorities can reliably and indisputably identify manipulated content in a timely manner, given the rapid evolution of AI technologies. Industry observers have suggested that enforcing these regulations may require sophisticated detection tools, and that the criteria for defining “manipulation” may not always be clear-cut.
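To illustrate why the criteria are hard to pin down, consider a hedged sketch of one classical image forensics technique, error level analysis (ELA): the image is re-compressed and the recompression error is examined for unusually uneven regions. Spliced or regenerated areas sometimes stand out, but benign edits such as filters, resizing or simple re-posting trip the same signal, which is exactly the ambiguity observers describe. The file name, quality setting and threshold below are illustrative assumptions only.

```python
# Illustrative error level analysis (ELA) sketch: re-save a JPEG and measure
# where recompression error is concentrated. Uneven error *may* hint at
# local manipulation, but benign edits (filters, resizing, re-posting)
# produce similar patterns, so this is a triage signal, not a verdict.
import io
import numpy as np
from PIL import Image, ImageChops

IMAGE_PATH = "campaign_photo.jpg"  # hypothetical input image
RESAVE_QUALITY = 90                # arbitrary illustrative setting

original = Image.open(IMAGE_PATH).convert("RGB")

# Re-compress in memory, then diff against the original.
buf = io.BytesIO()
original.save(buf, format="JPEG", quality=RESAVE_QUALITY)
buf.seek(0)
resaved = Image.open(buf).convert("RGB")

diff = np.asarray(ImageChops.difference(original, resaved), dtype=np.float32)
per_pixel_error = diff.mean(axis=2)

print(f"mean recompression error: {per_pixel_error.mean():.2f}")
print(f"max recompression error:  {per_pixel_error.max():.2f}")

# A large gap between the hottest region and the overall mean can flag the
# image for manual review; the threshold here is a placeholder, not a rule.
if per_pixel_error.max() > 10 * max(per_pixel_error.mean(), 1e-6):
    print("Error is unevenly distributed; consider closer inspection.")
```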
Alternative views have also been voiced, with global free speech advocates warning that the legislation could be used to suppress political dissent. They argued that the law could discourage legitimate criticism of candidates and chill political discourse during elections, for fear of repercussions should campaign content be construed as bordering on manipulation. According to them, such concerns are exacerbated by Singapore’s already strict regulations under the Protection from Online Falsehoods and Manipulation Act (POFMA), which has been critiqued for giving the government substantial power to determine what constitutes a falsehood in online content. Other groups have cautioned that this could create an overly restrictive environment for political communication.