Deepfakes appear to loom large as a threat, especially with the US presidential election around the corner. How serious is this threat, and what should we be aware of?
Ho: The use of biometrics and facial recognition technology has recently gained recognition from regulatory bodies as the next generation of measures to verify individuals’ digital identities during the onboarding process. This is currently being reflected in many countries’ KYC regulations.
However, many biometrics and liveness detection solutions in use today are inadequate for safeguarding against online identity impersonation, especially with the rise of deepfake technology.
Deepfake technology isn’t just being leveraged to sway public opinion or embarrass political officials; it’s being used to perpetrate online fraud and bypass traditional biometric authentication. Advanced deepfake tools can transform a 2D static ‘selfie’ image of an individual into a high-resolution clip of that person performing movements or pronouncing words in a lifelike manner.
Liveness detection methodologies often ask users to blink, smile, turn or nod their heads, watch colored flashing lights, make random faces, or speak random numbers. Sadly, most of these legacy techniques are easily spoofed by deepfakes.
Many enterprises wrongly assume that such fraud attempts only make up a small percentage of the total online onboarding count. This is simply not the case. With the automation of image extraction and simulation of liveness using deepfake technologies, bad actors could automate batched attacks on any business system, potentially resulting in thousands of account openings using fraudulent identities.
What should organizations and individuals be doing to deal with deepfakes? What tools and technologies are available for us to defend against deepfakes and other similar threats?
Ho: Organizations looking to implement eKYC technologies should prioritize the adoption of certified liveness detection solutions to fortify their defense against fraudulent attempts.
Online verification processes that leverage Artificial Intelligence (AI) to detect human liveness attributes, lighting, and the presence of missing pixels (an indication of a reproduced video) are highly effective at defeating deepfakes. The application of AI in identity authentication is an area that Jumio has been heavily investing in.
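To illustrate the general idea behind spotting a reproduced video, here is a minimal sketch of one such signal: a re-captured screen tends to leave fine periodic artifacts (such as moiré from the pixel grid) that show up as excess high-frequency energy in a frame's spectrum. This is a simplified heuristic for illustration only, not Jumio's or FaceTec's actual detection method; the function name and thresholding approach are assumptions.

```python
import numpy as np

def screen_replay_score(frame: np.ndarray) -> float:
    """Score how much high-frequency periodic energy a grayscale frame
    contains. A higher score suggests a re-captured screen (moiré / pixel
    grid artifacts). Illustrative heuristic only, not a production detector.
    """
    # 2D FFT; shift so low frequencies sit at the center of the spectrum
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(frame)))
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2

    # Distance of every frequency bin from the spectrum's center
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - cy, xx - cx)

    # Natural image content concentrates in the low-frequency core;
    # screen-replay artifacts add energy well outside it
    cutoff = min(h, w) // 4
    high = spectrum[radius > cutoff].sum()
    low = spectrum[radius <= cutoff].sum()
    return float(high / (low + 1e-9))

# A smooth, natural-looking gradient frame
smooth = np.outer(np.linspace(0.0, 1.0, 64), np.linspace(0.0, 1.0, 64))
# The same frame with fine stripes superimposed, simulating screen re-capture
replay = smooth + 0.3 * np.sin(2 * np.pi * 0.45 * np.arange(64))[None, :]
```

In this toy setup, the striped frame scores noticeably higher than the smooth one; a real system would combine many such cues (lighting, texture, 3D depth) rather than rely on any single heuristic.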
At Jumio, we have also integrated certified 3D liveness detection from FaceTec’s ZoOm® to identify and eliminate many well-documented vulnerabilities in 2D liveness detection methods which render them susceptible to spoofing. FaceTec is the first and only biometric technology to achieve perfect results in Level-1 and Level-2 certification testing. This is noteworthy because the Level-2 test attempts to spoof the technology using live human test subjects wearing realistic 3D masks.
Some common examples of liveness spoofs include:
- Photo attack: The attacker presents someone’s photo, printed or displayed on a digital device. For example, a pencil or ruler can be held horizontally and swiped vertically between the photo and the camera to simulate blinking.
- Animated avatar attack: A more sophisticated way to trick the system utilizes a regular photo that is quickly animated by software and transformed into a lifelike avatar of the fraud victim. The attack enables on-command ‘puppet’ facial movements (blink, nod, smile, etc.) that can look very convincing to the camera.
- 3D mask attack: Here, the fraudster wears a mask with the eye holes cut out to fool the liveness detection tool. This trick is even harder to detect than a replayed face video because, in addition to natural eye movements, the fraudster’s face exhibits the same 3D depth as a real human face.
At their core, deepfakes are 2D videos, not 3D human faces, so they are relatively easy to discern for a certified 3D liveness detection provider like FaceTec. The computer monitor used to play back the video emits light rather than reflecting it, and certified liveness detection can tell the difference. And if a criminal attempts to use a deepfake video by projecting it onto a 3D head, the skin texture won’t be quite right, and an advanced certified liveness solution will detect the generation loss, a sure-fire tipoff.
From a consumer standpoint, internet users should pay more attention to the Terms and Conditions of social media and photo-sharing apps to make sure that their biometric data is in safe hands. They are also encouraged to opt for online services that require a stringent onboarding and authentication process.