Amid a massive surge in deepfake and AI-driven fraud activity, one firm has contributed part of its digital arsenal to the world.

According to the firm’s own data, the first half of 2023 saw a huge rise in deepfake cases worldwide compared with the second half of 2022. AI-generated identity fraud was most prevalent in the Asia Pacific (APAC) region, where Australia (1,300%), Vietnam (1,400%) and Japan (2,300%) recorded the highest growth in attacks.

Similarly, the number of deepfakes increased by 84% in Great Britain, by 250% in the US, by more than 300% in Germany and Italy, and by 500% in France.

The firm, Sumsub, has announced that it has developed a set of models for detecting deepfakes and synthetic fraud in visual assets. These models will be publicly available for the AI community (including developers, AI researchers and scientists) to download, test and experiment with, and to explore innovative ways of tackling the escalating threat of deepfakes.

Using the set of four distinct machine-learning models for deepfake and synthetic-fraud detection, users will be able to estimate how likely it is that an uploaded image was artificially generated. Following guidelines established by the AI community, Sumsub provides detailed model cards — comprehensive documentation describing the datasets and performance metrics of its AI models.
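Sumsub has not published the internals of its scoring pipeline, so the sketch below is purely illustrative: it assumes each of the four detectors returns a probability that an image is synthetic, and shows one simple way such per-model scores might be combined into an overall estimate. The model names and the plain averaging strategy are assumptions, not Sumsub's actual method.

```python
# Illustrative sketch only: the detector names and the averaging strategy
# below are assumptions, not Sumsub's published scoring method.

def combine_scores(scores: dict[str, float]) -> float:
    """Average per-model probabilities that an image is AI-generated."""
    if not scores:
        raise ValueError("at least one model score is required")
    for name, p in scores.items():
        if not 0.0 <= p <= 1.0:
            raise ValueError(f"score for {name!r} must lie in [0, 1]")
    return sum(scores.values()) / len(scores)

# Hypothetical outputs from four deepfake / synthetic-fraud detectors:
example = {
    "deepfake_a": 0.91,
    "deepfake_b": 0.87,
    "synthetic_a": 0.78,
    "synthetic_b": 0.84,
}
likelihood = combine_scores(example)
print(f"Estimated probability the image is AI-generated: {likelihood:.2f}")
```

In practice an ensemble would likely use learned weights or a meta-classifier rather than a plain mean, but the principle — aggregating several independent detectors into one confidence score — matches the article's description of combining models with other methods of content analysis.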

Sumsub’s internal testing indicates that its AI models are effective in accurately detecting typical image alterations. Moreover, when these models are used alongside alternative methods of content analysis, AI-generated images can be more confidently recognized.

Following this initial contribution, Sumsub will leverage feedback from the AI-research community to further improve the models’ capabilities, allowing the platform to adapt and grow in tandem with other AI-driven tools.

Said Pavel Goldman-Kalaydin, Head of AI/ML, Sumsub: “As AI technologies advance, we foresee a tightening of regulations governing their use. For example, it may soon become mandatory to apply watermarks to all synthetic images. However, fraudsters will continually seek ways to overcome regulations, especially leading up to the 2024 US presidential elections.”

The firm hopes that the ML models supplied to the global AI community can serve as a foundation for further development in the battle against AI-generated fraud.