Using fake ads, counterfeit websites, and malicious remote-access tool executables disguised as AI-generated output, cybercriminals waste no time exploiting popular platforms.
Cyber threat researchers have uncovered a major cyberattack campaign targeting users of the popular generative AI platform Kling AI, highlighting the growing risks that accompany the rapid adoption of AI tools.
Since early this year, threat actors have been leveraging the GenAI platform’s popularity to distribute malware through elaborate social engineering tactics.
The attack began with around 70 fraudulent Facebook ads and pages that closely mimicked Kling AI’s branding. Unsuspecting users who interacted with these ads were redirected to a counterfeit website designed to replicate the legitimate platform’s interface. From there, users were prompted to upload images or generate AI content, after which they were offered a download purportedly containing their requested media file.
However, the downloaded files, presented as harmless images or videos, were actually Windows executables disguised using filename masquerading techniques. Further, attackers obscured the files’ true nature by employing double extensions and UTF-8 encoded Hangul Filler characters, making them appear as standard media files even in file explorers.
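The masquerading technique described above can be sketched in a few lines of Python. The filename and the `looks_masqueraded` helper below are illustrative assumptions, not artifacts from the actual campaign; the point is how U+3164 (Hangul Filler), which renders as blank space, can push a real `.exe` extension out of view in a file listing.

```python
# Sketch of the double-extension + Hangul Filler masquerade (illustrative only).
FILLER = "\u3164"  # HANGUL FILLER: renders as blank space in most fonts

# Hypothetical malicious filename of the kind described: a fake media
# extension, a run of invisible filler characters, then the real extension.
fake_name = "Generated_Image.jpg" + FILLER * 20 + ".exe"

print(repr(fake_name))                       # escapes reveal the hidden filler
print("Real extension:", fake_name.rsplit(".", 1)[-1])  # -> 'exe'

# A simple defensive heuristic (assumed, not from the report): flag names
# that contain invisible filler characters or more than one extension.
def looks_masqueraded(name: str) -> bool:
    invisible = {"\u3164", "\u115f", "\u1160"}  # Hangul filler code points
    return any(ch in invisible for ch in name) or name.count(".") > 1

print(looks_masqueraded(fake_name))  # True
```

A heuristic this crude would also flag benign names like `archive.tar.gz`, so in practice such checks are combined with signature and behavioral analysis rather than used alone.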
Once executed, the malware established persistence on the victims’ devices, evaded security analysis, and deployed a second-stage payload: a remote access trojan (RAT) known as PureHVNC. This allowed attackers to remotely control infected devices, steal sensitive information, and maintain long-term access. The malware also targeted web browsers and extensions to harvest passwords and other personal data.
According to Check Point Research (CPR) analysts who disclosed their findings, evidence suggests links to Vietnamese threat actors, as similar campaigns have previously used Vietnamese-language code and Facebook-based malvertising tactics.
This incident underscores a broader trend: cybercriminals are increasingly exploiting popular AI services as vectors for sophisticated malware and social engineering schemes that compromise personal and organizational security. As AI adoption accelerates, the public will need to remain vigilant against such deceptive campaigns and verify the authenticity of platforms before downloading files or providing sensitive information.