Why slog away all your life as a harried worker when generative AI tools can make you a celebrated ransomware coder overnight?
Back in January this year, Check Point Software researchers were already observing cybercriminals abusing ChatGPT for their misdeeds.
In one case, a malware coder used the generative AI chatbot to recreate known malware strains. As an example, he shared the code of a Python-based infostealer, produced simply by describing to the chatbot the functions of an existing malware with the same characteristics!
In another example, a member of the underground community who was not a developer and had only limited technical skills used the chatbot to write the first Python script of his life: a multi-layer encryption tool that any cybercriminal could potentially turn into ransomware.
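To make the term concrete, below is a minimal, benign sketch of what "multi-layer encryption" means in Python, assuming the widely used `cryptography` package (the actual script described by the researchers has not been published). It simply wraps data in independently keyed layers of Fernet encryption; nothing here is ransomware, but it shows how little code the concept requires.

```python
# Illustrative only: a toy multi-layer encryption routine built on the
# `cryptography` library's Fernet scheme (authenticated symmetric encryption).
from cryptography.fernet import Fernet

def encrypt_multilayer(data: bytes, layers: int = 2):
    """Encrypt `data` under several independently keyed layers."""
    keys = []
    for _ in range(layers):
        key = Fernet.generate_key()        # fresh key per layer
        data = Fernet(key).encrypt(data)   # each pass wraps the previous one
        keys.append(key)
    return data, keys

def decrypt_multilayer(data: bytes, keys: list) -> bytes:
    """Peel the layers off in reverse key order."""
    for key in reversed(keys):
        data = Fernet(key).decrypt(data)
    return data

ciphertext, keys = encrypt_multilayer(b"example payload")
assert decrypt_multilayer(ciphertext, keys) == b"example payload"
```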
When researchers asked ChatGPT about how cybercriminals can abuse its power, the chatbot replied:
“It is not uncommon for threat actors to abuse the use of artificial intelligence and machine learning to carry out their malicious activities,” and concluded that “OpenAI itself is not responsible for any abuse of its technology by third parties.”
ChatGPT or ThreatGPT?
Once assimilated, knowledge is virtually impossible to "remove" from a generative AI model. This means that security mechanisms focus on preventing the model from collecting or revealing certain types of information, rather than on eradicating that knowledge altogether.
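As a rough illustration of that distinction, guardrails typically sit in front of the model as filters. The sketch below is hypothetical (the blocklist terms and the `model_call` parameter are assumptions, not any vendor's actual implementation): it refuses matching prompts and withholds matching outputs, but the model's weights, and whatever they encode, are left untouched.

```python
# Hypothetical sketch: a guardrail filters requests and responses;
# it does not remove knowledge from the underlying model.
BLOCKED_TOPICS = {"ransomware", "keylogger", "botnet"}  # assumed toy policy

def trips_policy(text: str) -> bool:
    """Return True if the text matches the (toy) blocklist."""
    lowered = text.lower()
    return any(topic in lowered for topic in BLOCKED_TOPICS)

def guarded_completion(prompt: str, model_call) -> str:
    # Pre-filter the prompt and post-filter the output; the model's
    # weights (its "knowledge") are untouched either way.
    if trips_policy(prompt):
        return "Request refused by policy."
    response = model_call(prompt)
    return "Response withheld by policy." if trips_policy(response) else response
```

This is also why workarounds keep appearing: an attacker only needs to phrase a request so that it slips past the filter, not to restore deleted knowledge.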
That is why cybercriminals continue to enlist the help of generative AI to ease their workloads and threaten the world, in four common ways:
- Doing more with less: One of the biggest advantages that AI, and generative AI in particular, has brought to cybercriminals is its ease of use, which lets far more people carry out malicious activities, including would-be cybercriminals who previously lacked the skills. Small new groups capable of sophisticated cyberattacks continue to emerge, meaning cyber threats could soon escalate beyond our wildest imagination.
- Improving threat evasiveness: Cybercriminals were not always good at content creation, but the latest generative AI tools can now produce phishing content that is very difficult to detect. Moreover, the learning models behind these tools allow them not only to create the broadest range of malicious content but also to make it convincing, interactive, and hard to flag as spoofed or fake.
- Automating attacks: AI has already driven a significant increase in the use of bots and automated systems to carry out online attacks, raising cybercriminals' success rates. Cybercriminals can use AI-powered botnets to launch massive DDoS attacks that overwhelm their targets' servers and disrupt their services. With added help from generative AI, creating and managing DDoS campaigns will no doubt become easier still, and more resistant to defenders' countermeasures.
- Strengthening codebase resilience: When the developers of generative AI tools realized that bugs in their software were allowing hackers to gain unfettered access for free, they imposed new restrictions and fixed the bugs. Despite that fast response, however, cybercriminals quickly found ways around the restrictions. Earlier this year, Check Point Software research teams reported that cybercriminals were already distributing and selling their own modified ChatGPT APIs on the Dark Web. Threat actors have also learned to collectively leverage the power of AI to steal and sell premium accounts for AI tools, strengthen their malicious codebases, and defeat whatever future restrictions are imposed on them.
Of course, naysayers of the "AI Armageddon" predictions can argue that AI/ML are also the two main pillars on which improving cybersecurity capabilities rest. The fact is that the complexity and dispersion of today's corporate systems make traditional, manual monitoring, supervision, and risk control insufficient.
Either way, according to Check Point, mitigating the risks associated with advanced AI requires researchers and policymakers to work together to ensure that these technologies are developed in a safe and beneficial way.