Cybercriminals are bound to exploit AI defenses with data poisoning and other tactics, and to prey on human lapses and over-confidence
When it comes to AI in cybersecurity, the technology is casting doubt over existing security tools: the immediate concern is that it can be used to evade them.
A question many in the industry are asking is: should we fight fire with fire?
In this article I will cover some of the latest emerging AI attack trends, and explain why the answer to AI threats is not simply more AI.
Emerging AI threats and tactics
One very effective attack method is “phone-home morphing”. Once malware has penetrated the target network, it uses an API to call back to an AI tool, report its progress, and receive updates that help it move forward.
For example, ransomware blocked by an endpoint detection and response (EDR) tool will “phone home” to report what stopped it, then receive an update on how to overcome the obstacle. This repeats until the malware succeeds.
Even more dangerous is the concept of self-generating polymorphic code. Rather than calling back to base, such AI malware can learn from its environment independently and adapt its tactics to “live off the land” and progress its attack. The approach is currently too resource-intensive to be viable, but that is only a matter of time as computing power advances.
Alongside the threats from AI, there are also threats to AI, known as AI poisoning. Here, bad actors manipulate the data that AI tools rely on to identify patterns, spot trends and mine information. By seeding that data with false information, it is possible to trick the AI into learning the wrong lessons, such as deceiving a detection system into classifying malicious activity as benign so that attackers go unnoticed.
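To make the idea concrete, here is a minimal sketch of label poisoning. The dataset is synthetic, the classifier is deliberately simple, and the 20% poison rate is an assumption for illustration, not a real detection pipeline:

```python
# Illustrative only: shows how poisoned training labels degrade a simple
# classifier. Dataset, model and poison rate are assumptions for the sketch.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic "benign vs malicious" telemetry.
X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

# Baseline model trained on clean data.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# An attacker flips 20% of the training labels (malicious relabelled as benign).
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.2 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

print(f"accuracy with clean labels:   {clean_model.score(X_test, y_test):.3f}")
print(f"accuracy after poisoning:     {poisoned_model.score(X_test, y_test):.3f}")
```

The attacker never touches the model itself; corrupting a slice of the training data is enough to shift what the tool treats as "normal".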
Fighting fire with fire
Combating AI threats with AI has become one of the most common responses to these emerging attacks. What better way to counter an inhumanly fast threat than with an equally dynamic, defensive AI?
But while AI undoubtedly has its place in the security tech stack, relying entirely on it to combat new threats is a mistake. The ability of adversaries to poison and subvert defensive tools means there is always a risk that AI-powered security solutions will be tricked into overlooking malicious activity.
Wider deployment of AI threat detection means more opportunities for threat actors to understand how tools work, and then counteract them. As such, AI should be used judiciously, just as we use antibiotics with caution when fighting infection.
The best strategy is to limit the impact of AI-powered attacks by tightly controlling the environment they can access.
Limiting the learning surface
Reducing the attack surface is already a mainstay security strategy for keeping attackers out. Now we also need to think in terms of limiting the “learning surface” available to offensive AI tools already within the network.
Blocking invasive malware from accessing resources means the AI behind it will have less opportunity to learn, adapt, and progress the attack.
One proven strategy for doing so is breach containment, which uses micro-segmentation to limit how far an intruder can spread through the network. Rather than trying to outpace and catch the intruder, the threat is halted in its tracks until it can be eliminated. This has the knock-on effect of improving incident recovery, as the blast radius of an attack is far smaller.
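At its core, this comes down to a default-deny policy evaluated per flow. The sketch below illustrates the principle; the segment names, ports and policy format are assumptions, not any particular vendor's syntax:

```python
# Minimal sketch of default-deny micro-segmentation: only explicitly allowed
# flows between workload segments are permitted; everything else, including
# malware "phoning home" or moving laterally, is blocked.
# Segment names, ports and policy format are illustrative assumptions.

ALLOWED_FLOWS = {
    ("web-tier", "app-tier", 8443),      # web servers may call the app API
    ("app-tier", "db-tier", 5432),       # app servers may query the database
    ("app-tier", "update-proxy", 443),   # outbound traffic only via a proxy
}

def is_allowed(src_segment: str, dst_segment: str, port: int) -> bool:
    """Default deny: a flow is permitted only if explicitly allowlisted."""
    return (src_segment, dst_segment, port) in ALLOWED_FLOWS

# A compromised web server trying to reach the database directly, or to call
# out to an external AI service, simply has no matching rule.
print(is_allowed("web-tier", "db-tier", 5432))   # False: lateral move blocked
print(is_allowed("web-tier", "internet", 443))   # False: phone-home blocked
print(is_allowed("app-tier", "db-tier", 5432))   # True: legitimate traffic
```

Because unlisted flows are denied by default, the malware's AI has almost nothing to observe, probe or learn from.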
The problem is that traditional network segmentation approaches do not provide the control and agility needed to fight AI-powered threats. They offer no way to change security rules per asset, based on its status and context, which makes it increasingly difficult to keep up.
To counter the burgeoning AI threat, we need a step change in security: one that moves away from the static, network-based approaches of the past to a more dynamic model that applies security controls at a much more granular level, based on the risks identified.
We must restrict an AI-powered attack's ability to learn about our defenses and systems, thereby reducing its effectiveness.
Using a more dynamic approach, organizations can respond to and recover from an AI-powered breach more quickly, without having to shut systems down in the interim.
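To show what "dynamic, per-asset" could look like in practice, here is a minimal sketch in which policy tightens automatically as an asset's observed risk rises. The signal names, weights and thresholds are assumptions for illustration only:

```python
# Sketch of risk-driven, per-asset policy: as indicators of compromise push an
# asset's risk score up, its allowed communications shrink automatically.
# Signal names, weights and thresholds are illustrative assumptions.
from dataclasses import dataclass, field

RISK_WEIGHTS = {
    "blocked_execution": 30,    # EDR blocked a process on this asset
    "unusual_egress": 40,       # outbound traffic to an unapproved endpoint
    "new_admin_account": 50,    # unexpected privilege change
}

@dataclass
class Asset:
    name: str
    signals: list = field(default_factory=list)

    @property
    def risk_score(self) -> int:
        return sum(RISK_WEIGHTS.get(s, 0) for s in self.signals)

def policy_for(asset: Asset) -> str:
    """Tighten controls as risk rises: normal -> restricted -> quarantined."""
    if asset.risk_score >= 70:
        return "quarantine"     # isolate: deny all flows except management
    if asset.risk_score >= 30:
        return "restricted"     # allow only essential, allowlisted flows
    return "normal"

server = Asset("finance-db-01")
print(policy_for(server))                        # normal

server.signals += ["blocked_execution", "unusual_egress"]
print(server.risk_score, policy_for(server))     # 70 quarantine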