Adversaries to generate deepfakes to bypass facial recognition

Steve Povolny, Head of McAfee Advanced Threat Research: Computer-based facial recognition, in its earliest forms, has been around since the mid-1960s. While dramatic changes have since taken place, the underlying concept remains the same: providing a means for a computer to identify or verify a face.

There are many use cases for the technology, most related to authentication, and most aim to answer a single question: is this person who they claim to be?

The pace of technological progress has brought increased processing power, memory, and storage to facial recognition systems. New products have leveraged facial recognition in innovative ways to simplify everyday life, from unlocking smartphones, to passport ID verification at airports, to aiding law enforcement in identifying criminals on the street.

One of the most significant enhancements to facial recognition has come from advances in artificial intelligence (AI). A recent manifestation is the deepfake, an AI-driven technique that produces extremely realistic text, images, and videos that are difficult for humans to distinguish from the real thing. Used primarily to spread misinformation, the technique leverages generative capabilities such as the adversarial networks described below.

Generative Adversarial Networks (GANs) are a recent machine learning technique that, on the downside, can create fake yet incredibly realistic images, text, and videos. Modern hardware can rapidly process the many biometric measurements of a face and mathematically build or classify human features, among many other applications. While the technical benefits are impressive, flaws inherent in all types of models represent a rapidly growing threat that cyber criminals will look to exploit.
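
To make the adversarial dynamic concrete, here is a minimal sketch of the two-network training loop that defines a GAN, written in PyTorch. A generator learns to produce samples that a discriminator can no longer tell apart from real ones; that contest is what makes GAN output so realistic. The layer sizes, batch size, and the random stand-in for ‘real’ face data are illustrative assumptions, not any actual deepfake pipeline.

```python
# Minimal GAN training loop sketch (illustrative sizes and data).
import torch
import torch.nn as nn

LATENT_DIM, DATA_DIM = 64, 784  # e.g. a flattened 28x28 face crop

# Generator: maps random noise to a fake sample.
G = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, DATA_DIM), nn.Tanh(),
)
# Discriminator: scores how "real" a sample looks.
D = nn.Sequential(
    nn.Linear(DATA_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(1000):
    # Stand-in for a batch of real face images, normalized to [-1, 1].
    real = torch.rand(32, DATA_DIM) * 2 - 1
    fake = G(torch.randn(32, LATENT_DIM))

    # Discriminator step: learn to separate real from generated samples.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(32, 1)) + \
             bce(D(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    opt_d.step()

    # Generator step: learn to make the discriminator call fakes "real".
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(32, 1))
    g_loss.backward()
    opt_g.step()
```

As the generator improves, the discriminator's job gets harder; at equilibrium its output offers little signal, which is precisely the property attackers exploit when GAN output is pointed at a recognition system instead of a human viewer.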

As these technologies are adopted over the coming years, a very viable threat vector will emerge, and we predict adversaries will begin to generate deepfakes to bypass facial recognition. It will be critical for businesses to understand the security risks presented by facial recognition and other biometric systems, to educate themselves about those risks, and to harden critical systems.

The deepfake battle against biometrics

Frederic Ho, Vice President, APAC, Jumio Corporation: The use of biometrics and facial recognition technology has recently gained recognition from regulatory bodies as the next generation of measures for verifying individuals’ digital identities during onboarding. This is now reflected in many countries’ know your customer (KYC) regulations.

However, many biometric and liveness detection solutions in use today are inadequate safeguards against online identity impersonation, especially given the rise of deepfake technology.

Deepfake technology isn’t just being leveraged to sway public opinion or embarrass political officials; it is being used to perpetrate online fraud and bypass traditional biometric authentication. Advanced deepfake tools can transform a static 2D ‘selfie’ image of an individual into a high-resolution clip of that person moving or pronouncing words in a lifelike manner.

Liveness detection methodologies often ask users to blink, smile, turn or nod, watch colored flashing lights, make random faces, or speak random numbers. Sadly, most of these legacy challenge-response techniques are easily spoofed by deepfakes.
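
To illustrate how fragile such challenges are, below is a minimal sketch of one common legacy check: blink detection via the eye aspect ratio (EAR) computed over six eye landmarks, following the well-known formulation of Soukupová and Čech (2016). Landmark extraction (e.g. with dlib or MediaPipe) is assumed to happen upstream, and the threshold and frame count are illustrative rather than calibrated values.

```python
# Blink detection via eye aspect ratio (EAR) -- illustrative sketch.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: array of shape (6, 2), landmarks p1..p6 around one eye.
    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); it drops when the eye closes."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def blink_detected(ear_series, threshold=0.2, min_closed_frames=2):
    """A blink = EAR dips below threshold for a few consecutive frames.
    Threshold and frame count are assumed values, not calibrated ones."""
    closed = 0
    for ear in ear_series:
        if ear < threshold:
            closed += 1
            if closed >= min_closed_frames:
                return True
        else:
            closed = 0
    return False
```

A deepfake ‘puppet’ that blinks on command produces exactly the same EAR dip as a live user, which is why checks like this can no longer stand alone.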

With the automation of image extraction and simulation of liveness using deepfake technologies, bad actors could automate batched attacks on any business system, potentially resulting in thousands of account openings using fraudulent identities.

Some common examples of liveness spoofs include:

  • Photo attack: the attacker presents someone’s photo, either printed or displayed on a digital device. To simulate blinking, for example, a pencil or ruler can be held horizontally and swiped vertically between the photo and the camera. (A simple texture-based countermeasure to this attack is sketched after the list.)
  • Animated avatar attack: a more sophisticated way to trick the system uses a regular photo that is quickly animated by software and transformed into a lifelike avatar of the fraud victim. The attack enables on-command ‘puppet’ facial movements (blink, nod, smile, etc.) that can look very convincing to the camera.
  • 3D mask attack: the fraudster wears a mask with the eye holes cut out to fool the liveness detection tool. This trick is even harder to detect than a face video because, in addition to allowing natural eye movements, the mask exhibits the same 3D depth as a real human face.
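
As a counterpoint to the photo attack above, here is a minimal sketch of a classic texture-based countermeasure: printed or screen-displayed faces tend to show flatter micro-texture than live skin, a difference that local binary pattern (LBP) histograms can expose (an approach popularized by Määttä et al. in 2011). The extractor below is self-contained; in practice the histogram would feed a classifier trained on live and spoofed samples rather than any fixed rule.

```python
# LBP micro-texture features for photo/replay spoof detection -- sketch.
import numpy as np

def lbp_histogram(gray: np.ndarray) -> np.ndarray:
    """8-neighbour LBP codes for a grayscale image, as a normalized histogram.
    Each pixel is encoded by which of its 8 neighbours are at least as bright."""
    c = gray[1:-1, 1:-1]
    code = np.zeros_like(c, dtype=np.uint8)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        neighbour = gray[1 + dy:gray.shape[0] - 1 + dy,
                         1 + dx:gray.shape[1] - 1 + dx]
        code |= (neighbour >= c).astype(np.uint8) << bit
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / hist.sum()

# In practice these histograms would feed a trained classifier (e.g. an SVM)
# that separates live faces from print/replay spoofs.
```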

Organizations looking to implement eKYC technologies should prioritize the adoption of certified liveness detection solutions to fortify their defense against fraudulent attempts.  

Online verification processes that leverage AI to detect human liveness attributes, lighting anomalies, and missing pixels – an indication of a reproduced video – are highly effective at defeating deepfakes.
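
As a rough illustration of the ‘missing pixels’ signal mentioned above, the sketch below flags frames whose high-frequency detail is implausibly low, or that contain runs of duplicated pixel rows, both of which can betray a re-encoded or screen-replayed clip. The function names and thresholds are illustrative assumptions; production systems combine many such signals inside trained models rather than relying on one heuristic.

```python
# Frame-level artifact heuristics for reproduced video -- illustrative only.
import numpy as np

def high_freq_energy(gray: np.ndarray) -> float:
    """Share of spectral energy outside the low-frequency core of the frame."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray)))
    h, w = spectrum.shape
    core = spectrum[h // 2 - h // 8: h // 2 + h // 8,
                    w // 2 - w // 8: w // 2 + w // 8]
    total = spectrum.sum()
    return float((total - core.sum()) / total)

def looks_reproduced(gray: np.ndarray,
                     min_hf_ratio: float = 0.05,
                     max_dup_rows: int = 5) -> bool:
    """Heuristic: too little fine detail, or stretches of duplicated rows.
    Both thresholds are assumed values for illustration."""
    row_diffs = np.abs(np.diff(gray.astype(np.int16), axis=0)).sum(axis=1)
    dup_rows = int((row_diffs == 0).sum())
    return high_freq_energy(gray) < min_hf_ratio or dup_rows > max_dup_rows
```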