Understanding how generative AI can be used in cyber-attacks paves the way to effectively defend against them.

The use of AI, particularly generative AI tools such as ChatGPT, is rapidly changing our society. These tools can now help with a wide range of tasks, from students’ homework and investment advice to turning selfies into elaborate paintings and even writing code. While all of this holds potential benefits for many of us, there are – rightly – concerns about how generative AI can be used in cyber-attacks.

It is critical to understand how new AI-powered attack methods are reshaping the threat landscape, so that society as a whole can prepare for them and develop novel defenses.

Vishing 

More people are now on the lookout for phishing emails due to increased education efforts by enterprises and government agencies. But there are still scenarios where one’s instinct overrides caution.

For example, an employee receives a WhatsApp voice message in which the head of their company requests a money transfer. If the account uses the executive’s profile picture and the voice sounds just like the boss, the employee is likely to comply.

Attackers can use AI text-to-speech models to craft convincing voice messages that deceive targets and extract confidential data. By mining publicly available recordings and other information, they can impersonate celebrities, company executives and even government officials; text-to-speech models turn that material into a believable persona.

To make matters worse, these models can be automated to conduct VoIP phishing – or vishing – campaigns at scale.
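To illustrate how low the barrier is, here is a minimal sketch using the open-source pyttsx3 library, which wraps the operating system’s built-in voices, to synthesize a voice message from text. This is not a voice-cloning attack – a real attacker would substitute a cloning model trained on recordings of the impersonated executive – but the pipeline is equally short, and the script text and file name are illustrative.

```python
import pyttsx3

# A minimal text-to-speech sketch: system voices only, no voice cloning.
# An attacker would swap in a cloning model trained on the target's recordings.
engine = pyttsx3.init()
engine.setProperty("rate", 165)  # speaking rate in words per minute

script = "Hi, it's me. I need you to process an urgent wire transfer today."
engine.save_to_file(script, "voice_message.wav")  # hypothetical output file
engine.runAndWait()
```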

Biometric authentication 

Facial recognition technology is similarly vulnerable to generative AI-assisted attacks. Researchers from Tel Aviv University have demonstrated that facial recognition authentication can be bypassed by using models called Generative Adversarial Networks (GANs) to create a “master face” or “master key”.

The generated faces were tested and optimized to match facial images in a large, open repository. The research produced nine images with a 60% chance of successfully matching faces in the database, making it possible for threat actors to compromise identities.
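The mechanics of such an attack are easiest to see in code. The sketch below is not the Tel Aviv University method; it only shows how a typical face-recognition system verifies identity by comparing embedding vectors against a similarity threshold – the surface a “master face” is optimized against. All embeddings and values here are illustrative stand-ins.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in embeddings; a real system derives these from face images with a model
rng = np.random.default_rng(seed=0)
enrolled = rng.normal(size=(1000, 128))   # 1,000 enrolled identities
candidate = rng.normal(size=128)          # embedding of one candidate face

THRESHOLD = 0.4  # systems accept a match above a tuned similarity threshold

# A random candidate rarely clears the threshold for anyone. A "master face"
# attack optimizes the candidate (e.g. by searching a GAN's latent space)
# until its embedding clears the threshold for as many identities as possible.
matches = sum(cosine_similarity(candidate, e) >= THRESHOLD for e in enrolled)
print(f"Candidate matched {matches} of {len(enrolled)} enrolled identities")
```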

Generative models are not a recent innovation. But recent advancements in AI technology have enabled threat actors to take advantage of them at a much larger scale. This is due to the exponential increase in parameters – variables in an AI system whose values are adjusted to establish how input data gets transformed into the desired output.

Thanks to cloud computing and new digital environments, models with billions of these parameters can now be trained and run at scale – putting that capability within reach of attackers as well.
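For a concrete sense of what a “parameter” is, the sketch below builds a toy neural network in PyTorch and counts its trainable parameters – roughly a million here, versus the billions in modern generative models. The layer sizes are arbitrary.

```python
import torch.nn as nn

# A toy two-layer network; each weight and bias is one trainable parameter
model = nn.Sequential(
    nn.Linear(512, 1024),
    nn.ReLU(),
    nn.Linear(1024, 512),
)

n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params:,} trainable parameters")  # 1,050,112 for this toy network
```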

As AI models continue to grow more sophisticated, they will become increasingly capable of creating realistic deepfakes, malware and other dangerous threats that could significantly alter the security landscape.

Polymorphic malware 

While researchers and developers are still exploring the uses of generative AI, cyber-attackers are already utilizing it to create malware, conduct reconnaissance and gain initial access during the early stages of an attack chain. Of particular concern is polymorphic malware: a generative model can rewrite the same malicious code into endless functionally identical variants, each with a different byte-level signature, so defenses that match known patterns never see the same sample twice.
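The sketch below shows why signature-based detection struggles with polymorphism: two byte-for-byte different snippets that do exactly the same thing produce entirely different hashes, so a signature derived from one variant never matches the next. The snippets are benign stand-ins for mutated payloads.

```python
import hashlib

# Two functionally identical snippets; a generative model can emit endless such variants
variant_a = b"for i in range(10): total = total + i"
variant_b = b"for i in range(10): total += i"

# Signature-based detection keys on exact byte patterns, so every variant
# produces a different hash and a stored signature never matches again
print(hashlib.sha256(variant_a).hexdigest())
print(hashlib.sha256(variant_b).hexdigest())
```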

However, it is still uncertain whether these AI tactics remain effective once an intruder has gained access and is attempting to escalate privileges, acquire credentials or move laterally. To prevent such threats from succeeding, robust and intelligent identity security measures must be implemented to protect essential systems and data.

AI-powered defense against attack innovation

These attack scenarios underscore that AI is already having a major impact on the security landscape and will continue to do so. It gives attackers more opportunities to target identities and circumvent authentication protocols.

Organizations must take stock of the consequences of failing to safeguard identities: the CyberArk 2023 Identity Security Threat Landscape Report found compromised identities to be the most successful method of infiltrating systems and accessing sensitive data.

Implementing malware-agnostic defenses can help protect against malicious activity, while preventive measures such as enforcing least privilege access should also be on the agenda.
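As a minimal illustration of least privilege, the sketch below denies every request unless the identity holds an explicit grant for that exact action and resource. The identity, action and resource names are hypothetical.

```python
# A minimal least-privilege check: deny by default, allow only explicit grants
GRANTS = {
    ("svc-reporting", "read", "sales-db"),  # hypothetical identity/action/resource
}

def is_allowed(identity: str, action: str, resource: str) -> bool:
    # No wildcard or inherited permissions: the exact tuple must be granted
    return (identity, action, resource) in GRANTS

print(is_allowed("svc-reporting", "read", "sales-db"))   # True
print(is_allowed("svc-reporting", "write", "sales-db"))  # False: no standing write access
```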

AI is a powerful tool for cybersecurity teams as well and can be used to combat changing threats, increase agility, and keep organizations ahead of attackers. By using AI to reinforce security measures around human and non-human identities, organizations can protect themselves against current and future threats.

Cybersecurity teams need to understand the implications of AI to take advantage of AI-enhanced defenses and remain one step ahead of threat actors.