Relying on traditional approaches to combat evolving cyberattacks is like bringing a sword to a gunfight: they are no longer enough.
The battle between cybercriminals and businesses has escalated: With the help of AI, malicious actors are posing increasingly sophisticated threats.
However, just as AI has empowered cybercriminals, it has also strengthened cybersecurity. In this digital warfare, fighting fire with fire is the only way for businesses to level the playing field against their attackers.
As cybercriminals weaponize AI to devise new strategies and outsmart the ‘good guys’, it is crucial for CISOs to use the same technology to fend off attacks and protect critical infrastructure.
It is therefore good news that 82% of IT decision-makers already plan to invest in AI-driven cybersecurity within the next two years.
Those who neglect to embrace AI will inevitably fall behind the rapid pace of cyberthreats. CybersecAsia sought insights into the AI battlefield in cybersecurity from Scott Robertson, Senior Vice President, Asia Pacific and Japan, Zscaler.
In a cybersecurity world of AI versus AI, what is the current state of AI in cybersecurity, and what developments can we expect next?
Robertson: AI has become an integral and transformative part of modern cybersecurity, and the current state of AI in cybersecurity can be categorized into two groups: AI for offense (used by cybercriminals) and AI for defense (used by cybersecurity professionals).

On the offensive front, cybercriminals have increasingly harnessed AI techniques to launch sophisticated attacks. By leveraging AI, they can automate tasks, analyze vast volumes of data to identify vulnerabilities, craft convincing phishing emails, and even develop malware that can adapt and evolve to elude traditional security measures. This has made cyberattacks more efficient, targeted, stealthy, and difficult to detect.
Conversely, cybersecurity professionals have also embraced AI as a powerful tool to fortify their defense capabilities. AI-powered security solutions can detect anomalies, identify patterns, and analyze large volumes of data to respond to threats in real time. Machine learning algorithms continuously learn from past attacks, improving their efficacy in mitigating new and evolving threats.
Furthermore, AI aids in automating routine tasks, streamlining processes, and accelerating response times, enabling security teams to focus on tackling more intricate and sophisticated threats.
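To make this concrete, here is a minimal Python sketch of the kind of unsupervised anomaly detection described above, flagging unusual network sessions for analyst review. The features, values, and model choice (scikit-learn's IsolationForest) are illustrative assumptions, not a description of any particular vendor's system.

```python
# Minimal sketch: flag anomalous network sessions with an unsupervised model.
# Feature names and values are illustrative assumptions, not a product spec.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-session features: [bytes_sent, bytes_received, duration_s, failed_logins]
normal = rng.normal(loc=[5_000, 20_000, 60, 0], scale=[1_500, 6_000, 20, 0.5], size=(2_000, 4))
suspicious = np.array([
    [900_000, 1_200, 5, 0],   # huge upload in a short session -> possible exfiltration
    [4_800, 19_500, 55, 12],  # ordinary traffic volume but many failed logins
])
sessions = np.vstack([normal, suspicious])

# Train on historical traffic so the model learns what "normal" looks like.
model = IsolationForest(contamination=0.01, random_state=42).fit(normal)

# predict() returns -1 for sessions the model considers anomalous.
flags = model.predict(sessions)
print(f"flagged {np.sum(flags == -1)} of {len(sessions)} sessions for review")
```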
Looking towards the future, the landscape of AI in cybersecurity presents both the prospect of increasingly malicious attacks and even more promising advancements in defensive applications. AI algorithms will continue to evolve, equipping cybersecurity systems with enhanced threat detection capabilities. Through techniques such as behavioral analysis, anomaly detection, and predictive analytics, AI will enable the identification of previously unknown and sophisticated threats.
These developments will empower organizations to bolster their cybersecurity posture and proactively safeguard their critical assets from emerging risks.
What are the challenges and opportunities posed by generative AI for cybersecurity?
Robertson: In the ever-evolving landscape of cybersecurity, the rise of generative AI has brought both challenges and opportunities for defending against cyber threats.
Challenges:
- Synthetic content: Generative AI creates synthetic content that closely resembles real data, making it difficult to identify fake information and to detect manipulation of models. This raises privacy concerns, as attackers can exploit this capability to generate convincing fakes, risking privacy breaches and identity theft.
- Cybercriminal evasion techniques: Generative AI enables the development of evasion techniques that bypass traditional security measures, increasing the difficulty of threat detection and mitigation. Attackers can generate malware variants that elude signature-based detection systems, allowing them to remain undetected and potentially inflict significant damage.
- Scalability and computational requirements: Generative AI models demand substantial computational resources for training and inference, presenting scalability challenges for real-time cybersecurity applications, especially with large-scale data streams.
Opportunities:
- Enhanced threat and anomaly detection: Leveraging generative AI, organizations can improve threat detection by learning patterns from large datasets and synthetic samples. This enables the identification of anomalies in network traffic, user behavior, or system logs, bolstering the detection of potential security breaches. From accurately recommending policies to performing impact analyses effectively, AI helps simplify security operations. AI's continuous learning also allows for a proactive approach to breach prediction.
- Synthetic data generation for training: Utilizing generative AI, organizations can create synthetic datasets that augment limited real-world data, enhancing the performance and robustness of cybersecurity systems. This addresses data scarcity and enables more comprehensive AI model training, ensuring a more effective defense against cyber threats (a brief illustrative sketch follows this answer).
- Increasingly robust defenses: Organizations can develop AI techniques to detect and counter AI-generated content and adversarial attacks, strengthening their cybersecurity defenses. Generative AI's comprehensive risk classification also aids vulnerability analysis and helps prioritize patching efforts, bolstering the overall security infrastructure.
By seizing these opportunities and implementing secure AI solutions, organizations can harness the transformative power of generative AI to bolster their cybersecurity measures.
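To illustrate the synthetic-data opportunity above, the following Python sketch augments a scarce set of malicious samples with noisy synthetic variants before training a simple detector. The features, noise model, and classifier are assumptions chosen for brevity; a production pipeline would rely on a learned generative model rather than simple jitter.

```python
# Minimal sketch: augment scarce malicious samples with synthetic variants
# before training a detector. Features and noise scale are illustrative
# assumptions; a real system would use a learned generative model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical feature vectors: [url_length, digit_ratio, subdomain_count]
benign = rng.normal([30, 0.05, 1], [8, 0.03, 0.5], size=(500, 3))
malicious = rng.normal([70, 0.30, 3], [15, 0.10, 1.0], size=(20, 3))  # scarce class

def augment(samples: np.ndarray, n_new: int, noise: float = 0.1) -> np.ndarray:
    """Create synthetic samples by jittering randomly chosen real ones."""
    picks = samples[rng.integers(0, len(samples), size=n_new)]
    return picks + rng.normal(0.0, noise * samples.std(axis=0), size=picks.shape)

synthetic = augment(malicious, n_new=480)

X = np.vstack([benign, malicious, synthetic])
y = np.array([0] * len(benign) + [1] * (len(malicious) + len(synthetic)))

# Train a simple detector on the augmented, now-balanced dataset.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", round(clf.score(X, y), 3))
```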
Where do you see the role of human intelligence in collaborating with AI to enhance cyber-defenses?
Robertson: While AI has taken the industry to a whole new level, human intelligence still plays a fundamental and critical role in enhancing cyber-defenses. AI brings advanced capabilities to detect and respond to cyber threats, but human intelligence provides essential context, critical thinking, and decision-making abilities that complement AI’s capabilities.
Firstly, humans possess the ability to contextualize and to understand nuances and evolving trends in the cybersecurity landscape. Humans still play a primary role in interpreting and assessing the implications of AI-generated insights and findings, weighing factors such as business objectives, legal and ethical considerations, and the specific needs of an organization.
Secondly, humans bring an adversarial mindset to the table, thinking creatively and strategically to anticipate and counteract potential attacks. This includes identifying emerging threats that AI algorithms may not have encountered before and developing proactive strategies to mitigate risks.
Ethical considerations also fall within the domain of human intelligence. Humans provide ethical oversight and ensure responsible use of AI in cybersecurity. We can evaluate the potential impact of AI on privacy, human rights, and societal implications, making informed decisions about the deployment and limitations of AI in cyber-defense.
Lastly, humans possess the ability to continuously learn, adapt, and update their knowledge and skills. In the ever-changing cybersecurity landscape, human intelligence is essential for staying a step ahead of emerging threats, understanding new attack techniques, and adapting defense strategies accordingly.
Overall, the collaboration between human intelligence and AI in cybersecurity is a symbiotic relationship, where AI augments human capabilities and assists in handling the increasing volume and sophistication of cyber threats. By combining the strengths of human intelligence and AI, organizations can build more effective and resilient cyber-defense strategies.
In what areas is Zscaler leveraging AI technologies to help organizations improve their risk and security postures?
Robertson: Zscaler has been leveraging AI/ML to enhance customer safety and is constantly introducing new AI-based security measures to combat the latest attacks, strengthen data protection, and secure the usage of generative AI.
For example, Zscaler’s exclusive large language models (LLMs) are fully integrated with the world’s largest security cloud, which is supported by a data lake containing over 300 billion daily transactions. Here are some ways that Zscaler’s AI capabilities are benefiting organizations:
- AI-powered segmentation: Organizations can reduce risks by automatically identifying application segments and minimizing their internal attack surface. This enables the creation of appropriate zero trust access policies.
- Fast time to data protection: Business leaders can accelerate data protection programs by immediately protecting data through ML-based automatic data classification. No manual configuration is required.
- AI-driven root cause analysis: Identify the root causes of poor user experiences 180 times faster than with traditional methods. This enables quick resolution, reduces mean time to resolution (MTTR), and frees IT resources from time-consuming troubleshooting and analysis.
- AI-driven sandboxing verdicts: Prevent infections by leveraging AI that can instantly detect malicious files. This enables swift decision-making without letting potentially harmful files into the organization while a sandbox analysis completes.
- Effective sensitive data protection: Safeguard sensitive data while retaining AI prompts and the output of AI applications for security and audit purposes.
- Strengthening security posture against AI app risks: Obtain comprehensive risk scoring for AI applications, enabling better control over usage and managing potential risks.
- Ensure secure use of tools such as ChatGPT: Gain granular control over AI application usage, so that different policies can be applied to different users and groups (see the generic sketch after this list).
- Limit risky actions in AI apps: Prevent actions that could put data at risk, such as uploads, downloads, and copy/paste functions, by utilizing Zscaler Browser Isolation.
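To illustrate what granular, per-user control over generative-AI applications can look like in principle, here is a generic Python sketch of a group-based policy check. The groups, policy fields, and decision logic are hypothetical and do not represent Zscaler's products or APIs.

```python
# Generic sketch of per-group policy for generative-AI app usage.
# Illustrates the concept only; this is not Zscaler's API or policy engine.
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    allow: bool
    isolate_browser: bool   # render the app in an isolated browser session
    block_uploads: bool     # disallow file uploads and paste of sensitive data

# Hypothetical group-to-policy mapping: engineering may use ChatGPT in an
# isolated session with uploads blocked; finance is blocked outright.
POLICIES = {
    "engineering": Policy(allow=True, isolate_browser=True, block_uploads=True),
    "marketing":   Policy(allow=True, isolate_browser=False, block_uploads=True),
    "finance":     Policy(allow=False, isolate_browser=False, block_uploads=True),
}

def decide(user_group: str, app: str) -> str:
    # Unknown groups fall back to a default-deny policy.
    policy = POLICIES.get(user_group, Policy(False, False, True))
    if not policy.allow:
        return f"BLOCK {app} for {user_group}"
    mode = "isolated" if policy.isolate_browser else "direct"
    uploads = "uploads blocked" if policy.block_uploads else "uploads allowed"
    return f"ALLOW {app} for {user_group} ({mode}, {uploads})"

for group in ("engineering", "finance", "contractors"):
    print(decide(group, "chat.openai.com"))
```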