The way to thwart ultra-realistic fake videos is to fight fire with fire, says this biometrics expert.
If your CEO gave you instructions on a video call, even under unusual circumstances, you probably would not consider it a scam, right? After all, with the rise of mobile payments and digital banking, a transfer request made by your company boss over a video call seems as legitimate as it gets, at least compared with phishing emails or even voice calls purporting to be from the same boss…
The bad news is that manipulated videos, called deepfakes, are now so realistic and believable that the person on your laptop screen may not be your CEO at all, even if his lip movements synchronize perfectly with his words.
The use of such deepfake videos nearly doubled in the first six months of 2020, and advances in AI and machine learning have likely grown that number further since. Just look at how deepfakes have become more sophisticated. From merely replacing faces in pre-recorded videos, such as Obama’s “public service announcement” or Mark Zuckerberg’s unconstrained “speech” about privacy, they have evolved to allow cybercriminals to impersonate others in live video feeds, in real time.
So, how can businesses counter this rising threat and defend themselves against deepfakes?
Fighting tech with tech
While high-risk transactions already involve many safety and identity-authentication measures, deepfakes can, under certain circumstances, be used to spoof an identity and pass those verification processes.
To mitigate this threat, businesses must review online processes, bolster cybersecurity and invest in the right kinds of technologies that can detect and intercept deepfakes.
For instance, many identity verification providers have started introducing ‘liveness’ detection, which allows companies to confirm that a real, physically present user is behind an app. Several types of liveness detection technology are available, and different providers use varying technologies and algorithms. However, not all liveness detection is created equal. Legacy techniques that ask users to make facial movements, such as blinking, smiling, turning, nodding or speaking, can now be easily spoofed by deepfake technology, as the sketch below illustrates.
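To see why these legacy checks are weak, consider what a blink check actually measures. Below is a minimal sketch using the well-known eye-aspect-ratio (EAR) heuristic over face landmarks; the landmark source, thresholds and function names here are illustrative assumptions, not any vendor’s implementation. A deepfake only has to render the same eyelid motion to pass it.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """Eye aspect ratio (EAR) over six eye landmarks, after
    Soukupova & Cech (2016). Drops toward zero when the eye closes."""
    # eye: array of shape (6, 2), landmark points p1..p6 around one eye
    a = np.linalg.norm(eye[1] - eye[5])  # vertical distance p2-p6
    b = np.linalg.norm(eye[2] - eye[4])  # vertical distance p3-p5
    c = np.linalg.norm(eye[0] - eye[3])  # horizontal distance p1-p4
    return (a + b) / (2.0 * c)

def count_blinks(ear_series, closed_thresh=0.2, min_frames=2):
    """Count a blink when EAR stays below the threshold for a few
    consecutive frames. Thresholds are illustrative. A deepfake can
    reproduce exactly this motion, which is why the check alone fails."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < closed_thresh:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    return blinks
```

Because the check reduces to a simple geometric signal over time, any generator that animates the eyelids convincingly will satisfy it.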
This is where ‘certified liveness’ detection comes into play. Certified liveness detection goes through rigorous testing to ensure that advanced spoof attempts can be thwarted.
How do they do this? Since deepfakes are 2D videos (and not 3D human faces), AI can be used to pick up on their non-human traits. For instance, the screen used to play a deepfake video emits light (rather than reflecting it), and certified liveness detection can tell the difference. One crude way to pick up such replay artifacts is sketched below.
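As a toy illustration of the idea, and emphatically not how any certified product works, one can look for the periodic pixel-grid and moiré patterns that a replay screen introduces, which concentrate spectral energy at high frequencies. Every name and number here, from the cutoff to the threshold, is an invented placeholder:

```python
import numpy as np

def high_frequency_ratio(gray_face: np.ndarray, cutoff: float = 0.25) -> float:
    """Share of spectral energy above a radial cutoff frequency.
    Screens replaying video tend to add periodic pixel-grid / moire
    patterns, which push energy into the high-frequency band."""
    spectrum = np.fft.fftshift(np.fft.fft2(gray_face.astype(np.float32)))
    power = np.abs(spectrum) ** 2
    h, w = gray_face.shape
    yy, xx = np.ogrid[:h, :w]
    # normalized radial distance from the center of the spectrum
    r = np.hypot((yy - h / 2) / (h / 2), (xx - w / 2) / (w / 2))
    return float(power[r > cutoff].sum() / power.sum())

def looks_like_screen_replay(gray_face, thresh=0.35):
    # A real system would fuse many cues; a lone threshold like this
    # is easy to defeat and is shown only to make the idea concrete.
    return high_frequency_ratio(gray_face) > thresh
```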
AI can also be used to detect instances where someone’s skin texture looks ‘off’, or where video quality has degraded because multiple copies were made, a telltale sign of video manipulation. One openly documented way to surface such recompression artifacts is sketched below.
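For the recompression cue, a classic public technique is error level analysis (ELA): re-save a frame as JPEG and diff it against the original, since regions that were already compressed, or pasted in from elsewhere, respond differently to recompression. The sketch below uses Pillow; the quality setting and the scoring are illustrative assumptions, and certified products rely on far more robust proprietary detectors.

```python
from io import BytesIO
from PIL import Image, ImageChops

def error_level_analysis(frame: Image.Image, quality: int = 90) -> Image.Image:
    """Error level analysis (ELA): re-save the frame as JPEG and diff it
    against the original. Areas that were already compressed or spliced
    in respond differently to recompression and stand out in the diff."""
    buf = BytesIO()
    frame.convert("RGB").save(buf, "JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf)
    return ImageChops.difference(frame.convert("RGB"), resaved)

# Usage: treat the mean brightness of the diff as a coarse score.
# ela = error_level_analysis(Image.open("frame.png"))
# score = sum(ela.convert("L").getdata()) / (ela.width * ela.height)
```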
To sneak past certified solutions, hackers would need to invest in expensive, bleeding-edge technology, such as look-alike 3D animatronic puppets that can also exhibit life-like gestures and natural reactions to their environment. This includes a combination of reflections in the eyes, pupil reactions and a variety of very subtle movements.
All this requires a much bigger investment, not only of time and effort but also of substantial technology development. And even if that investment is made, certified solutions may still be able to detect the small differences.
Proving liveness in next-gen biometrics
This is why proving liveness through certified solutions is becoming particularly important for businesses and organizations addressing changing security needs.
Considering the financial and reputational damage that deepfake scams can cause, we can expect more critical sectors to follow suit in adopting a stringent approach to biometric identity verification and authentication.
Only when the specter of ever-improving deepfakes is under control will organizations be able to re-establish the chain of trust with their customers in virtual settings.