The tool injects synthetic videos into identity checks, bypassing physical cameras to enable sophisticated digital impersonation and fraud schemes
A newly identified tool capable of injecting deepfake content directly into compromised iOS devices highlights the growing sophistication of digital identity fraud techniques.
The tool operates by exploiting jailbroken Apple devices running iOS 15 or later, enabling attackers to bypass normal security protections and insert manipulated video streams into applications, particularly those that rely on biometric authentication.
Unlike traditional spoofing, which involves showing a falsified video or image to a camera, the technique manipulates the underlying video feed. The process involves:
- connecting a compromised iPhone or iPad to a server controlled by the attacker
- streaming synthetic media to the device, which can include face-swap videos or motion re-enactments in which a victim's still image is animated with another person's movements
- bypassing the physical camera input so that applications on the device, including biometric verification systems, treat the injected deepfake as a live video feed
The scale and automation potential of this method make it especially concerning for identity verification systems with weak or absent biometric safeguards. Successful use of the tool could allow fraudsters to impersonate legitimate users or manufacture synthetic identities capable of navigating digital verification checks.
Researchers have noted that the tool is suspected to have been developed in China, although attribution could not be independently confirmed. Its emergence, however, comes as states and businesses remain concerned about the national security implications of advanced identity fraud tools and the broader risks tied to manipulated video and generative AI.
The discovery, announced on 18 September 2025 by iProov, underscores the inadequacy of single-layer verification methods, which can be more easily bypassed by increasingly advanced injection techniques. Their analysts stress the importance of multi-layered defenses, combining biometric checks with real-time liveness detection and continuous monitoring, to detect synthetic media before it can compromise authentication systems.
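The multi-layered principle the analysts describe can be sketched as a conjunction of independent checks: a face match alone would accept an injected deepfake that resembles the victim, so liveness and feed-integrity signals must also pass. The sketch below is purely illustrative; the signal names, thresholds, and decision logic are assumptions for the example, not iProov's actual system.

```python
from dataclasses import dataclass

@dataclass
class VerificationSignals:
    """Signals gathered during one identity check (all names hypothetical)."""
    face_match_score: float   # biometric similarity to enrolled user, 0.0-1.0
    liveness_score: float     # real-time liveness estimate, 0.0-1.0
    feed_integrity_ok: bool   # e.g. camera-pipeline attestation passed

def verify_identity(signals: VerificationSignals,
                    match_threshold: float = 0.9,
                    liveness_threshold: float = 0.8) -> bool:
    """Accept only when every independent layer passes.

    This is what 'multi-layered' means in practice: an injected deepfake
    may score highly on face match yet fail liveness or feed integrity.
    """
    return (signals.face_match_score >= match_threshold
            and signals.liveness_score >= liveness_threshold
            and signals.feed_integrity_ok)

# A convincing injected deepfake: strong face match, but the liveness and
# camera-attestation layers reject it.
injected = VerificationSignals(face_match_score=0.97,
                               liveness_score=0.35,
                               feed_integrity_ok=False)

# A genuine user passes all three layers.
genuine = VerificationSignals(face_match_score=0.95,
                              liveness_score=0.92,
                              feed_integrity_ok=True)

print(verify_identity(injected))  # False
print(verify_identity(genuine))   # True
```

The design point is that the layers fail independently: an attacker who defeats one signal (face similarity) still has to defeat the others, which is why single-layer verification is described as inadequate.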
According to iProov's chief scientist, Andrew Newell: “The tool’s suspected origin is especially concerning and proves that it is essential to use a liveness detection capability that can rapidly adapt,” alluding to a possible shift towards “industrialized attacks”.