From cheapfakes to deepfakes, we can expect more people to be duped by scammers using generative AI tools. Enterprises and politics will not be spared…
Generative AI may have helped businesses and individuals become more productive and creative, but one significant drawback is its potential for misuse in creating misleading or harmful content.
For instance, deepfakes – AI-generated videos that can superimpose faces onto other bodies – have been used to create fake news, false endorsements and fraudulent videos.
This misuse can lead to misinformation and disinformation, damaging reputations and influencing political processes. The ease of creating convincing fake content using deepfake technology threatens the authenticity of information, challenging our ability to distinguish between real and AI-generated content.
According to Keeper Security’s latest cybersecurity survey of more than 800 IT security leaders around the globe, AI-powered attacks emerged as the most serious attack vector, followed by deepfake technology.
Sumsub’s annual Identity Fraud Report found that the APAC region experienced a 1,530% surge in deepfake cases from 2022 to 2023, making it the region with the second-highest growth in this concerning trend.
This coming Valentine’s Day, romance scammers will again target people looking for love, using social engineering tactics designed to exploit their trust by developing a ‘relationship’ with them. The Singapore Police Force’s Annual Scams and Cybercrime Report 2023 noted that the number of love scams in Singapore has increased year on year, with losses in the first half of 2023 alone amounting to almost $26 million.
These figures cover only reported cases, suggesting the actual numbers could be much higher. For 2024, Proofpoint predicts that cybercriminals will continue to refine their strategies to exploit the human element. Love scams will be no different, especially when enhanced by deepfakes.
“I strongly advocate for heightened vigilance when coerced away from established platforms into private conversations, where the protective layers of the initial site are forfeited. Regardless of the involvement of generative AI or deepfakes, the watchword is caution,” said Chris Boyd, Staff Research Engineer, Tenable.
From love scams to business and politics
David Ng, Country Manager, Singapore, Trend Micro, said: “With the huge amount of personal data and content on social media, AI-powered deepfake technology is being used to create convincing audio and video impersonations and deepfakes of individuals, including business leaders, politicians, and celebrities.”
Cybercriminals can use this technology for fraud, social engineering, or discrediting targets without a significant amount of investment. “Attacks can be launched not just by nation states and corporations, but also by individuals and small criminal groups,” said Ng. “This means that deepfakes could potentially expand the cybercriminal pool.”
Tenable’s Boyd added: “With a number of major elections coming up in 2024, the possibility of being duped by lies and faked video footage is stronger than ever. Consider that many adults continue to fall for so-called ‘cheapfakes’ (crudely edited photographs and memes on social media), and that in many cases scammers don’t even need to reach for AI tools in the first place to achieve their objectives.”
On the business front, news headlines have recently highlighted a multinational company being scammed out of HK$200 million after an employee in Hong Kong attended a video-conference call with deepfake-created ‘live’ videos of the company’s CFO and other employees.
Trend Micro’s Ng commented: “The wide adoption of AI means that such attacks are becoming easier to execute — which could potentially result in a surge in deepfake scams.”
Shahnawaz Backer, Senior Solutions Architect, F5, believes there will be more: “The malicious use of GenAI extends beyond deepfakes, encompassing sophisticated phishing email campaigns that facilitate ransomware attacks. Furthermore, bad actors exploit GenAI for spreading and fine-tuning propaganda, leading to potential social unrest. In politics, there were concerns about the use of deepfakes to manipulate speeches or statements, potentially impacting elections or public opinion.”
Watch out!
Among the AI-powered tools that have become progressively more sophisticated, Ng foresees voice cloning being significantly abused in scams in the near future. “This is because voice cloning tools are among the AI tools ripe for hyper-realistic audio and video misrepresentation in real time. Such threats are also likely to remain more targeted, as they require adversaries to collect numerous audio sources from specific individuals to ensure successful AI-driven voice impersonation.”
He added: “We have already seen examples of voice cloning-related scams executed in 2023. One example is virtual kidnapping, a scam in which cybercriminals falsely claim to have kidnapped a loved one. Malicious actors use voice cloning, SIM jacking, ChatGPT, and other AI-enabled tools to identify the most profitable targets and execute their ploy.”
What can organizations do to address this growing threat?
The first thing is to always be skeptical, said Ng. “We always advise our customers to adopt the Zero Trust mindset, and ‘never trust, always verify’, as this encourages proactivity when it comes to their cybersecurity. The same thinking applies here.”
Ng advises users to scrutinize any audio or video content and engagements such as video or voice calls. He pointed out some signals that users should watch out for and best practices they should adopt:
- Watch out for anomalies in the visuals and audio, such as atypical facial movements or blinking patterns, audio that does not match lip movements, and glitches or noticeable edits around the face.
- Check and verify the source of the audio, video, or content. For example, if it’s a video from a news channel, they will likely have consistent and standard banners, branding, and a running script of breaking news at the bottom. These are all signs of authenticity.
- For live voice or video calls among colleagues in an organization, it is important to authenticate those involved in the call with three basic factors: something that the person has, something that the person knows, and something that the person is. Ensure that the “something” items are chosen wisely.
- Prioritize the use of biometric patterns where authentication is required, and use multifactor authentication (MFA) when possible (a minimal verification sketch follows this list).
- To address the problem at scale, significant policy changes should consider current and future circumstances, as well as address the use of current and previously exposed biometric data.
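To make the “something that the person has” factor concrete, here is a minimal Python sketch of out-of-band verification using a time-based one-time password (TOTP): a colleague on a suspicious video or voice call is asked to read out the current code from an authenticator app enrolled in advance. It assumes the open-source pyotp library; the function names and flow are illustrative, not a prescribed implementation.

```python
# Minimal sketch: out-of-band verification for a live call, assuming the pyotp
# library and a pre-shared TOTP secret enrolled on the caller's device
# ("something the person has"). Names and the challenge flow are illustrative.
import pyotp

def enroll_caller() -> str:
    """Generate a base32 TOTP secret for a colleague (done once, in person)."""
    return pyotp.random_base32()

def verify_caller(secret: str, spoken_code: str) -> bool:
    """During a suspicious video/voice call, ask the caller to read out the
    6-digit code from their authenticator app and check it here."""
    totp = pyotp.TOTP(secret)
    # valid_window=1 tolerates one 30-second step of clock drift
    return totp.verify(spoken_code, valid_window=1)

if __name__ == "__main__":
    secret = enroll_caller()
    print("Enrolment secret (add to an authenticator app):", secret)
    # Simulate the caller reading out the current code
    current_code = pyotp.TOTP(secret).now()
    print("Caller verified:", verify_caller(secret, current_code))
```

In practice the secret would be enrolled and stored by an identity system rather than printed, and the check would sit alongside the other factors Ng describes rather than replace them.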
“To defend against AI-driven scams, organizations must deploy various measures,” added F5’s Shahnawaz. These measures include:
- Understanding and documenting sensitive transactions, coupled with implementing rigorous verification processes before executing them (a simple illustration follows this list).
- Educating staff to discern the legitimacy of communications and identify potential scams.
- Utilizing AI-driven detection tools to pinpoint and highlight suspicious activities or messages.
- Securing in-house AI systems with a multi-layered security approach, integrating robust authentication, stringent access controls, and proactive vulnerability management protocols.
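As one illustration of the first measure, the hedged Python sketch below gates high-value transfers behind an independent call-back confirmation, so that a convincing video call alone is never enough to move money. The threshold, field names, and workflow are assumptions for illustration, not any vendor’s process.

```python
# Illustrative sketch only: a gate that blocks high-value transfers until a
# second approver confirms over an independent, pre-registered channel.
# Threshold, field names, and workflow are assumptions, not a vendor process.
from dataclasses import dataclass

HIGH_VALUE_THRESHOLD = 50_000  # example threshold in the organization's currency

@dataclass
class TransferRequest:
    requester: str
    beneficiary_account: str
    amount: float
    approved_out_of_band: bool = False  # True only after a call-back on a known number

def execute_transfer(request: TransferRequest) -> str:
    """Refuse to act on instructions received in a call or email alone."""
    if request.amount >= HIGH_VALUE_THRESHOLD and not request.approved_out_of_band:
        return "BLOCKED: requires call-back verification and a second approver"
    # ... hand off to the payment system here ...
    return f"EXECUTED: {request.amount} to {request.beneficiary_account}"

if __name__ == "__main__":
    req = TransferRequest("cfo@example.com", "HK-1234-5678", 200_000)
    print(execute_transfer(req))          # blocked until verified
    req.approved_out_of_band = True
    print(execute_transfer(req))          # proceeds after independent confirmation
```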
In line with the above, organizations can invest in AI-driven detection tools that use algorithms to analyze video content for signs of manipulation, ranging from unnatural facial movements to lighting changes.
“These tools are also constantly updated with the latest deepfake techniques,” concluded Ng. “As AI-driven deepfakes continue to proliferate, it is paramount that companies proactively set appropriate risk and compliance guardrails to protect themselves. Implementing zero trust and establishing a vigilant mindset will be crucial for enterprises to avoid falling prey to such scams.”
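For readers curious what the frame-level analysis behind such detection tools looks like, here is a toy Python sketch using OpenCV’s stock Haar cascades to estimate how often a visible face appears with no detectable eyes, a crude proxy for the blink behaviour that early deepfakes often got wrong. The video filename is hypothetical and the heuristic is illustrative only, not a reliable detector; commercial tools rely on trained models rather than rules like this.

```python
# Toy heuristic sketch, not a production deepfake detector: count frames where a
# face is visible but no eyes are detected, as a rough proxy for blinking.
# Early deepfakes often showed unnatural blink rates; modern ones may not, so
# treat this only as an illustration of frame-level signal analysis with OpenCV.
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def estimate_blink_ratio(video_path: str) -> float:
    """Return the fraction of face-bearing frames in which no eyes were found."""
    cap = cv2.VideoCapture(video_path)
    face_frames, closed_eye_frames = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in faces:
            face_frames += 1
            eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])
            if len(eyes) == 0:
                closed_eye_frames += 1
    cap.release()
    return closed_eye_frames / face_frames if face_frames else 0.0

if __name__ == "__main__":
    ratio = estimate_blink_ratio("suspect_call_recording.mp4")  # hypothetical file
    print(f"Eyes-not-detected ratio: {ratio:.2%}")  # unusually low or high values warrant a closer look
```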