Scammers can create increasingly realistic deepfakes, but the key is for us to be cynical about all online material by default
In 2024 it is very difficult to spot anomalies in deepfake video clips. However, since the most common use of deepfake videos is to call us to action, the context of a video can warn us that a deepfake may be involved.
What are the possible contexts we should be alert to? Let us start with some that have already been used. First, a person resembling someone we know appears on our smartphone, telling us he needs money urgently and appealing to us to send it online.
Next, a local celebrity appears in a pop-up ad in some clickbait news feed, telling us he wants to share a get-rich-quick scheme and that we should act fast before the authorities clamp down on the golden opportunity. “The amazing insider tip the banks don’t want you to know!” is a typical example of the catchy clickbait headline.
The most sophisticated example of late involved scammers in Hong Kong impersonating company directors on a live conference call to convince a finance worker to transfer US$25m to a fictitious setup. Using real-time facial mimicry and probably audio deepfaking, the scammers likely produced quite convincing deepfakes with live actors whose faces were replaced with 3D models of the actual directors’ faces.
So, if we cannot spot deepfakes visually, what are the contexts we should be alert to in order to keep our guard up at all times?
Some common deepfake contexts
For the following scenarios to kindle our suspicions, the prerequisite is that we stay alert to video, audio and information fakery at all times, whether at work or in our personal time online. Once this becomes muscle memory, watch out for:
- Any video involving local or overseas celebrities promoting any cause: Telltale signs of a deepfake include any get-rich-quick scheme, an exclusive investment technique or course, or anything requiring your urgent financial commitment. If disinformation is the motive, you will likely be told tall stories: the key is to spot the call-to-action and verify the claims across multiple news sources and agencies.
- Product or service testimonials: While deepfakes of celebrities may or may not be used in such advertisements, the point is that good-looking people can be made to spout lies and exaggerated claims about a product or service. This can be combined with images of supposed news articles or online videos of events supporting the advertised claims.
- Business videoconference compromise: As shown by the financial fraud incident in Hong Kong described above, organizations should incorporate multiple levels of checks so that no finance executive can issue funds without the knowledge of other key personnel (see the sketch after this list). Finance staff should also be trained to pose identity-challenge questions to videoconference participants to verify their authenticity. These challenge questions must be refreshed regularly, and top executives will need to memorize the secret answers, as well as physical or verbal gestures for verifying their identity, in case a deepfake crisis occurs.
- Appeals for donations, ransoms and other crises: In the early days of smartphone deepfake video calls, millions of dollars were lost in fake kidnap scenarios. Now, the contexts in which deepfakes are used have expanded to include complex financial/investment scams, election campaign influencing, misinformation/disinformation campaigns, and slow, long-drawn psyops campaigns of social and emotional manipulation. Whenever we view social media messages with powerful video content, we should adopt a critical, skeptical mindset. Nothing should ever be able to influence us to do anything rash, illegal, or financially or socially questionable until we have allowed ourselves enough time to triple-verify the sources of the claims.
- Voice fraud: Without having to fake visual information, scammers can devote more time to luring their victims over the phone using just deepfaked audio. Many of the use cases for deepfake videos can also be pulled off with voice fraud alone. Unsolicited calls supposedly from loved ones, colleagues or long-lost friends should be treated with extreme caution:
✔ Do not let unknown callers prompt you with questions to get your responses recorded for voice-printing. It takes only 10 seconds of recording your voice uttering common words to obtain a usable imprint for synthesizing hundreds of other words!
✔ Never act alone on any information conveyed in a possible audio deepfake scenario. Inform others in your close circle of contacts, call the people who may have been deepfaked, and whatever you do, never send money or other assets to anyone until due diligence has been thoroughly exhausted!
- Honey-trapping conversations: Cybercriminals can use deepfakes to start a romantic online relationship to gain personal information and trust from their quarry before launching sextortion, blackmail or other attacks. If you cannot resist online romantic relationships, follow the mnemonic CYNICAL at all times:
Cautious and circumspect at all times
Your personal privacy above all
Notice every red flag or inconsistency in the person’s behavior or story
Investigate via background research
Consult and seek help from your close circle of friends to stay objective and sober
Avoid impulsivity no matter how convincing your chat partner is
Limit your financial exposure
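To make the “multiple levels of checks” advice in the videoconference item concrete, here is a minimal sketch of a dual-control payment release. It assumes a hypothetical in-house payment system: the names (PaymentRequest, approve, release, verified_out_of_band) are illustrative only, not any real product’s API. The point is that no single person, however convincing the video call, can move funds alone, and that identity must be confirmed out of band.

```python
from dataclasses import dataclass, field

APPROVALS_REQUIRED = 2  # at least two approvers besides the requester

@dataclass
class PaymentRequest:
    """Hypothetical payment request in an in-house finance system."""
    requester: str
    payee: str
    amount_usd: float
    approvers: set = field(default_factory=set)
    verified_out_of_band: bool = False  # e.g. a call-back on a known number

    def approve(self, approver: str) -> None:
        # The requester can never approve their own request.
        if approver == self.requester:
            raise PermissionError("requester cannot self-approve")
        self.approvers.add(approver)

    def release(self) -> None:
        # Funds move only after enough independent approvals AND an
        # out-of-band identity check (never over the same video call).
        if len(self.approvers) < APPROVALS_REQUIRED:
            raise PermissionError("not enough independent approvals")
        if not self.verified_out_of_band:
            raise PermissionError("identity not verified out of band")
        print(f"Released US${self.amount_usd:,.2f} to {self.payee}")

# Usage: a transfer prompted by a (possibly deepfaked) video call.
req = PaymentRequest(requester="finance_clerk", payee="Acme Ltd",
                     amount_usd=25_000_000)
req.approve("cfo")
req.approve("treasury_head")
req.verified_out_of_band = True  # clerk called the CFO back on a known number
req.release()  # raises PermissionError if any check above fails
```

The out-of-band flag is the crucial design choice here: verification must travel over a channel the scammer does not control, such as a call-back to a number already on file.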
As generative AI can now make vishing and video/audio deepfaking even more dynamic and realistic in real-time conversations, staying cynical and wary, both online and offline, is key to our safety. Cybercriminals and scammers are constantly finding ways to appeal to your raw emotions, limit your time to respond or do research, and pressure you into making mistakes (especially if you appear guarded).
The key is to stay calm, appear to play along while quietly raising the alarm with people around you, and avoid disclosing or disbursing anything of value. Stay safe!