The technology is a double-edged sword: those who wield it must also stay cognizant of its risks.
AI systems are especially adept at identifying patterns in massive amounts of data and making predictions based on those patterns.
For example, business email compromise (BEC) attacks have been growing in frequency, and they can evade many email security tools because they typically carry no payload such as a link or attachment.
Additionally, traditional API-based email security solutions scan for threats post-delivery, an approach that often requires time-consuming effort from IT or security teams to populate the tool with data. Because this approach does not scale well, many teams apply those controls only to a select group, such as senior executives. That leaves threat actors free to target a much broader set of people within the organization when carrying out payroll diversion attacks.
Using generative AI for scanning
AI/ML-driven threat detection, along with LLM-based pre-delivery detection, can be used to interpret the contextual tone and intent of an email. This pre-delivery approach protects organizations by blocking fraudulent and malicious emails before they reach employees, greatly reducing exposure to threats like BEC.
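As a rough illustration of pre-delivery scanning, the sketch below shows an inbound message being scored by an LLM and either blocked or released before it ever reaches the mailbox. The `llm_complete` helper and the prompt wording are hypothetical placeholders, not any vendor's actual API or model.

```python
# Illustrative pre-delivery flow: the email is scored for intent *before*
# it is released to the recipient's mailbox. `llm_complete` is a stand-in
# for whatever LLM endpoint a given product calls; it is not a real API.

INTENT_PROMPT = """You are an email security analyst. Read the email below and
answer with exactly one word: FRAUD if it shows signs of business email
compromise (payment urgency, executive impersonation, secrecy requests,
changed banking details), otherwise CLEAN.

Email:
{body}
"""


def llm_complete(prompt: str) -> str:
    """Hypothetical LLM call; swap in your provider's client here."""
    raise NotImplementedError


def pre_delivery_check(sender: str, body: str) -> str:
    """Return 'block' or 'deliver' before the message reaches the inbox."""
    verdict = llm_complete(INTENT_PROMPT.format(body=body)).strip().upper()
    if verdict == "FRAUD":
        return "block"  # quarantined; the employee never sees it
    return "deliver"
```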
However, to work well, AI and ML solutions need massive amounts of high-quality data because the models learn from patterns and examples rather than rules.
This is why training models with millions of daily emails from a worldwide threat intelligence ecosystem is crucial: it ensures higher-fidelity detection and gives security and IT teams confidence in the effectiveness of their defenses.
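To make the "patterns and examples rather than rules" point concrete, here is a minimal, generic sketch of a text classifier trained on labeled emails with scikit-learn. The tiny inline dataset is purely illustrative; a real deployment would train on the millions of daily messages described above.

```python
# Minimal illustration of learning from examples rather than rules:
# a toy classifier trained on a handful of labeled emails.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Please process this urgent wire transfer before noon, keep it confidential",
    "Our banking details have changed, update the payroll account today",
    "Attached is the agenda for Thursday's project review meeting",
    "Reminder: the quarterly all-hands is moved to 3 pm on Friday",
]
labels = [1, 1, 0, 0]  # 1 = BEC-style fraud, 0 = benign

# The model learns which word patterns correlate with fraud from the examples.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

# It can then score unseen mail using the patterns it has learned.
print(model.predict_proba(["CEO here - I need gift cards purchased quietly"])[:, 1])
```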
When evaluating new cybersecurity solutions that rely on AI and ML, the following questions need to be addressed:
- Where does the product vendor get their data for training algorithms? Obtaining data for general-purpose AI applications is easy, but threat intelligence data is not as readily available. The training data used by the vendor should reflect not only real-world scenarios, but also threats that are specific to the organization and its employees.
- What does the product use in the detection stack to supplement AI/ML? AI on its own is not equally efficient, effective, or reliable against every type of threat. It is crucial for a security solution to integrate other techniques, such as rules and signatures, or a “human-in-the-loop” model that leverages IT teams’ expertise without giving up the speed and self-learning benefits of AI in the production environment (a minimal sketch of such a layered stack follows this list).
- Is generative AI optimal for your organization’s specific challenges? AI models are complex and computationally intensive, and may take longer to execute than less complicated functionalities. Sometimes rules-based techniques are more effective, especially when fast response is critical. It is therefore necessary to understand your organization’s security objective and which path best solves the problem.
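The following sketch illustrates one way such a layered stack could be wired together: cheap deterministic rules run first, an ML score runs next, and ambiguous messages are escalated to an analyst. The sender list, regex, thresholds, and `ml_score` placeholder are assumptions for illustration, not a specific product's logic.

```python
# Sketch of a layered detection stack: fast rules and signatures first,
# an ML score second, and human-in-the-loop review for uncertain cases.
import re

KNOWN_BAD_SENDERS = {"payroll-update@example-typo.com"}  # illustrative signature
URGENCY_PATTERN = re.compile(r"\b(wire transfer|gift cards?|urgent payment)\b", re.I)


def ml_score(body: str) -> float:
    """Placeholder for a trained model's fraud probability (0.0 - 1.0)."""
    raise NotImplementedError


def triage(sender: str, body: str) -> str:
    # Layer 1: deterministic rules catch the obvious cases almost instantly.
    if sender in KNOWN_BAD_SENDERS or URGENCY_PATTERN.search(body):
        return "block"

    # Layer 2: the ML model handles patterns that rules cannot express.
    score = ml_score(body)
    if score >= 0.9:
        return "block"
    if score <= 0.2:
        return "deliver"

    # Layer 3: uncertain verdicts go to the security team for review.
    return "escalate_to_analyst"
```

Running the rules first keeps latency low for clear-cut messages, while the escalation path preserves human judgment for the gray area in between.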
The dark side of GenAI
As the security community tries to understand the implications of AI (and generative AI), we cannot overlook the fact that bad actors can also use it to their advantage.
Threat actors are already abusing this technology, using open-source large language models to develop malicious tools such as WormGPT, FraudGPT, and DarkBERT. These tools enable attackers to craft far more convincing phishing emails and translate them into many more languages.
Threat actors will not give up their existing tactics or reinvent the wheel as long as their current models remain lucrative. Defenders must keep a sharp focus on the more immediate threats, and ensure they have foundational defenses in place.