Such content groups may have attracted as many as a million followers with legitimate content, but their ultimate goal is to spread malware and phishing scams.
To cash in on the craze for generative AI chatbots, cybercriminals are creating Facebook pages and social media groups filled with engaging tips and content, building trust before planting phishing links.
The phishing links offer downloads of unofficial or non-existent versions of generative AI chatbots such as Bard New, Bard Chat, GPT-5, G-Bard AI and others. Some posts and groups also try to take advantage of the popularity of other AI services and tools, such as Midjourney and Jasper AI.
These groups may have already attracted a huge following with innocuous content beforehand, so their followers often have no idea that scammers are behind the posts. In fact, users in the groups often passionately discuss AI in the comments and like and share the posts, which spreads the reach of the malicious content even further.
However, once users inadvertently download something phishy, they may end up infecting their devices with malware such as Doenerium, an open-source infostealer.
The malware uses multiple legitimate services such as GitHub, Gofile and Discord for command-and-control communication and data exfiltration. A GitHub account is used by the malware to deliver a Discord webhook, which is then used to report all the information stolen from the victim to the actor's Discord channel.
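Because the webhook URL is the pivot of this C2 scheme, defenders can hunt for it statically in samples or memory dumps. A minimal triage sketch, assuming the sample embeds a standard Discord webhook URL; the `find_webhook_urls` helper and the sample bytes are hypothetical, not taken from Doenerium itself:

```python
import re

# Discord's public webhook URL scheme: /api/webhooks/<numeric id>/<token>.
WEBHOOK_RE = re.compile(
    r"https://(?:discord|discordapp)\.com/api/webhooks/\d+/[A-Za-z0-9_-]+"
)

def find_webhook_urls(data: bytes) -> list[str]:
    """Scan raw bytes (e.g. a sample or memory dump) for webhook URLs."""
    # latin-1 maps every byte, so binary input never raises a decode error.
    return WEBHOOK_RE.findall(data.decode("latin-1"))

# Hypothetical sample blob containing an embedded webhook URL.
sample = b"...cfg https://discord.com/api/webhooks/1234567890/aBcD_eF-123 ..."
print(find_webhook_urls(sample))
# → ['https://discord.com/api/webhooks/1234567890/aBcD_eF-123']
```

Any hit on this pattern in an unknown binary is a strong indicator worth investigating, since legitimate software rarely hard-codes a webhook secret.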
Additionally, the malware targets cryptocurrency wallets including Zcash, Bitcoin, Ethereum, and others, and it steals FTP credentials from FileZilla as well as sessions from various social and gaming platforms. Once all the data has been collected from the targeted machine, it is consolidated into a single archive and uploaded to the file-sharing platform Gofile.
It all looks legitimate, too
Another campaign exploiting the popularity of AI tools uses a "GoogleAI" lure to deceive users into downloading malicious archives that contain the malware in a single batch file, such as GoogleAI.bat. Like many similar attacks, it uses an open-source code-sharing platform, this time GitLab, to retrieve the next stage of the attack. The final payload, a Python script called libb1.py, is a browser stealer that attempts to steal login data and cookies from all of the major browsers; the stolen data is exfiltrated via Telegram.
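Exfiltration via Telegram means the script must call the public Telegram Bot API, which embeds the bot token directly in the request URL. That makes the token itself a useful triage artifact. A minimal sketch, assuming the documented `api.telegram.org/bot<token>/<method>` URL scheme; the helper name and sample string are hypothetical:

```python
import re

# Telegram Bot API URLs carry the credential in the path:
# https://api.telegram.org/bot<numeric id>:<secret>/<method>
TELEGRAM_BOT_RE = re.compile(r"api\.telegram\.org/bot(\d+):([A-Za-z0-9_-]+)/(\w+)")

def find_telegram_bots(text: str) -> list[tuple[str, str, str]]:
    """Return (bot_id, token_secret, api_method) tuples found in a script."""
    return TELEGRAM_BOT_RE.findall(text)

# Hypothetical stealer snippet embedding a bot token.
script = "requests.post('https://api.telegram.org/bot111:AAxyz_1/sendDocument')"
print(find_telegram_bots(script))
# → [('111', 'AAxyz_1', 'sendDocument')]
```

A recovered token also lets incident responders ask Telegram to disable the bot, cutting off the exfiltration channel.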
Check Point Research, which disclosed the findings above, has also recently uncovered many sophisticated campaigns that employ Facebook ads and compromised accounts disguised, among other things, as AI tools. These advanced campaigns introduce a new, stealthy stealer bot, ByosBot, that operates under the radar. The malware abuses the .NET single-file, self-contained bundle format, which results in very low or even zero static detection. ByosBot focuses on stealing Facebook account information, making these campaigns self-sustaining: the stolen data can subsequently be used to propagate the malware through newly compromised accounts.
Unfortunately, authentic AI services make it possible for cybercriminals to create and deploy fraudulent scams in far more sophisticated and believable ways. It is therefore essential for individuals and organizations to educate themselves, be aware of the risks and stay vigilant against cybercriminals' tactics.