Deepfake technology and generative AI provide cybercriminals with easily accessible tools for sophisticated spoofs and attacks.
A recent survey by UK-based biometric authentication technology company iProov found a staggering 704% increase in face swap attacks between the first half (1H) of 2022 and 1H 2023.
With generative AI tools making it ever easier to create fake identities and deepfakes, face swap attacks can only be expected to grow.
CybersecAsia had a conversation with Dominic Forrest, Chief Technology Officer, iProov, on this subject.
With deepfakes becoming more sophisticated, facial verification systems are vulnerable to spoofing attacks. How can facial verification methods help organizations ensure they are verifying a live person and not a pre-recorded video or image?
Forrest: Threat actors today are not just more sophisticated, but also increasingly persistent. Attack sequences typically last more than 60 days, with multiple threat actors sustaining them for periods extending over six months.
This is why liveness detection is crucial: it ensures that an online user is a real person by differentiating between genuine humans and spoofs. However, not all liveness detection is created equal. Most liveness detection technology can detect a presentation attack: masks, recorded sessions played back to the device’s camera, or even a deepfake video held in front of the camera in an attempt to spoof the system.
These defences do not guarantee protection against a digital injection attack, which bypasses the device’s camera to inject synthetic imagery directly into the data stream. Here’s where technology that uses controlled illumination comes in: it creates a one-time biometric that cannot be recreated or reused, and it detects whether the person verifying is the right person, a real person, and verifying in real time, thereby providing greater anti-spoofing protection across a range of attacks.
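To make the one-time biometric idea concrete, below is a minimal, self-contained sketch of challenge-response liveness using controlled illumination. Everything in it is an assumption for illustration: the colour challenge, the crude whole-frame tint check, and all function names are hypothetical rather than iProov's actual method, and real systems analyse how light reflects off facial geometry rather than frame averages.

```python
"""Sketch: challenge-response liveness via controlled screen illumination.

A one-time random colour sequence is flashed on the user's screen while
frames are captured. A live face reflects the screen light, so each frame's
dominant tint should track the challenge. Imagery recorded or injected
earlier cannot match a sequence that did not yet exist.
All names and thresholds here are hypothetical.
"""
import secrets

import numpy as np

COLOURS = {"red": (255, 0, 0), "green": (0, 255, 0), "blue": (0, 0, 255)}


def issue_challenge(length: int = 4) -> list[str]:
    """Generate a fresh one-time colour sequence; it is never reused."""
    return [secrets.choice(list(COLOURS)) for _ in range(length)]


def dominant_channel(frame: np.ndarray) -> str:
    """Name the RGB channel with the strongest mean intensity in the frame."""
    means = frame.reshape(-1, 3).mean(axis=0)
    return ("red", "green", "blue")[int(np.argmax(means))]


def verify_liveness(challenge: list[str], frames: list[np.ndarray],
                    min_matches: int = 3) -> bool:
    """Accept only if the captured frames' tints track the challenge."""
    if len(frames) != len(challenge):
        return False
    hits = sum(dominant_channel(f) == c for c, f in zip(challenge, frames))
    return hits >= min_matches


if __name__ == "__main__":
    challenge = issue_challenge()
    # A "live" capture: each simulated frame is tinted by the colour shown.
    live = [np.tile(np.array(COLOURS[c], dtype=np.uint8), (4, 4, 1))
            for c in challenge]
    # A "replayed" video: frames tinted by an old, unrelated sequence.
    replay = [np.tile(np.array(COLOURS["red"], dtype=np.uint8), (4, 4, 1))
              for _ in challenge]
    print(verify_liveness(challenge, live))    # True
    print(verify_liveness(challenge, replay))  # almost always False
```

Because the colour sequence is generated fresh for each session and never reused, any imagery recorded or synthesized before the challenge was issued cannot exhibit the matching illumination pattern, which is what makes the resulting biometric one-time.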
How do organizations ensure that users onboarding or authenticating on their platform are the right person and a real person, authenticating right now?
Forrest: Resilient identity verification is needed to ensure a remote individual’s identity, content and credentials are genuine. This involves binding an individual to their trusted government-issued ID. Non-negotiables are threat intelligence, security operations centers (SOCs) specializing in biometrics, and cloud-based solutions to quickly detect and respond to evolving threats.
Implementing liveness detection and analysis across company processes is a crucial measure for all organizations. The liveness solution will examine signals from the imagery and the device for signs of spoofs presented to the camera or injected into the system. A robust face verification solution not only detects liveness but also detects synthetically created imagery such as deepfakes.
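As a rough illustration of how such signals might feed a single onboarding decision, the sketch below aggregates hypothetical upstream scores (document-face match, presentation attack, injection, synthetic imagery) under assumed thresholds. The field names and numbers are placeholders, not any vendor's actual policy.

```python
"""Sketch: folding verification signals into one onboarding decision.

The scores are assumed to come from upstream models (ID-document match,
liveness, deepfake detection); names and thresholds are hypothetical.
"""
from dataclasses import dataclass


@dataclass
class VerificationSignals:
    doc_face_match: float      # selfie vs government-ID photo similarity, 0..1
    presentation_score: float  # mask/replay shown to the camera, 0..1
    injection_score: float     # feed injected past the camera, 0..1
    synthetic_score: float     # synthetically generated face (deepfake), 0..1


def decide(s: VerificationSignals) -> str:
    """Bind the user to their trusted ID only when every threat signal is low."""
    if s.doc_face_match < 0.80:
        return "reject: face does not match government-issued ID"
    threats = [("presentation attack", s.presentation_score),
               ("injection attack", s.injection_score),
               ("synthetic imagery", s.synthetic_score)]
    for name, score in threats:
        if score > 0.30:
            return f"escalate to SOC: suspected {name}"
    return "accept"


print(decide(VerificationSignals(0.93, 0.05, 0.02, 0.55)))
# -> escalate to SOC: suspected synthetic imagery
```

Routing suspicious cases to a biometric SOC rather than hard-rejecting them reflects the threat-intelligence point above: analysts can confirm novel attack patterns and feed them back into the detection models.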
It is also vital that face-capture processes are simple, opt-in, and free from complex instructions or cognitive challenges, and that they are certified against the WCAG 2.2 accessibility standard at Level AA.
Malicious actors are increasingly using AI-powered tools for biometric attacks. How can governments and organizations leverage AI to develop countermeasures and strengthen their biometric authentication systems against such threats?
Forrest: Continuous monitoring and understanding of evolving threats are key to developing countermeasures. For instance, multi-frame challenge-response liveness detection coupled with a cloud-based SOC empowers detection and prevention of generative AI-powered attacks, deepfakes, face swaps, and metadata manipulation techniques.
In addition, combining AI-powered, multi-frame liveness detection with multi-factor authentication (MFA) bolsters defences.
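A minimal standard-library sketch of that combination: the one-time-password arithmetic follows RFC 6238 (the standard TOTP algorithm), while the authenticate gate, which requires the biometric liveness result and the possession-factor code to both pass, is an illustrative assumption rather than a prescribed design.

```python
"""Sketch: biometric liveness as one factor alongside a TOTP code."""
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32: str, t: float | None = None,
         digits: int = 6, step: int = 30) -> str:
    """RFC 6238 time-based one-time password over a shared base32 secret."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if t is None else t) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF)
    return str(code % 10 ** digits).zfill(digits)


def authenticate(liveness_ok: bool, face_match_ok: bool,
                 submitted_code: str, secret_b32: str) -> bool:
    """Grant access only when the biometric factor AND the code both pass."""
    return liveness_ok and face_match_ok and hmac.compare_digest(
        submitted_code, totp(secret_b32))


if __name__ == "__main__":
    SECRET = base64.b32encode(b"demo-shared-secret!!").decode()
    print(authenticate(True, True, totp(SECRET), SECRET))   # True: both factors pass
    print(authenticate(True, True, "000000", SECRET))       # False: wrong code
    print(authenticate(False, True, totp(SECRET), SECRET))  # False: liveness failed
```

The point of the layering is that an attacker must defeat two independent factors: a stolen one-time code does not survive the liveness check, and a deepfake that somehow passes liveness still lacks the possession factor.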
As AI technology continues to advance, so will the capabilities of threat actors. What strategies can governments and organizations implement to stay ahead of this evolving “arms race” in the realm of AI-based biometric authentication?
Forrest: Firstly, organizations need to recognize that no verification system is unbreachable, especially manual or basic AI ones. Synthetically generated media is readily used to fool both humans and weak security systems.
To the point about AI being leveraged by threat actors, organizations need to fight fire with fire, so to speak. Deliberate, shrewd investment in advanced AI-powered biometrics enables robust defences, as it allows analysis of multiple frames throughout video feeds.
In the same vein, organizations must also move beyond old standards. Don’t rely on Presentation Attack Detection (PAD) certification alone: perform your own testing or employ experienced biometric red-team testers. Collaboration with governments and experts can also help here, as it fosters knowledge sharing and can lead to more effective solutions.
In adopting solutions, organizations should prioritize flexibility. This is crucial to avoid being constrained by rigid processes and ensure the prompt implementation of countermeasures against emerging threats.
Finally, an often overlooked aspect in all this is inclusion and user experience. Advanced security is crucial, but should not lead to complex or exclusionary user experiences that cause drop-off.
How can organizations effectively educate themselves about the ways facial recognition in videos can be compromised?
Forrest: It is true that the remote identity verification threat landscape is less well understood than other cybersecurity domains, such as ransomware and phishing. While most biometric vendors track metrics like pass/fail rates, they cannot continuously observe live attack attempts because they do not deploy a cloud delivery model and a dedicated biometric SOC.
The imperative, then, is to undertake a security mindset overhaul. Simply put, organizations need to pivot from a reactive, after-the-fact security posture to one equipped with real-time visibility into their attack surface.