Insiders from one frontline cybersecurity firm share their informed predictions on what could dominate the headlines this year.
The power and potential of agentic AI — adaptive, automated, and independent — dominated security conversations during 2025, although experts note ongoing challenges in governance and reliability.
What are frontline cybersecurity leaders expecting from emerging agentic-style AI in 2026, and what will it mean for cybersecurity, amid ongoing debate over its rushed and hyped adoption?
Here are four predictions from Barracuda's experts, edited for length and editorial style…
1. Evolution of the threat landscape in 2026 and beyond
By next year, attacks may incorporate AI that behaves like an independent operator, making real-time choices, though full autonomy remains limited by current technical constraints. We already see AI automating parts of the kill chain, such as reconnaissance, phishing, and basic defense evasion.
The shift in 2026 could be towards systems that plan their steps, learn from defenses in real time, and reroute with less human steering. A malicious AI operator could run much of the process end-to-end: gathering intelligence, crafting lures, trying paths, observing how protections react, then shifting tactics and timing. These tactics may resemble a coordinated system that strings steps together, learns from obstacles, and blends into normal activity, even as oversight gaps persist.
Defenders should anticipate evolving attack types and tactics unlike anything seen before, producing patterns that may be hard to explain after an incident. The attack surface will expand, creating both known and unknown gaps, and zero-day exploitation may rise.

Yaz Bekkar, Principal Consulting Architect, XDR (EMEA).
2. More deepfakes and voice-fakes expected
Generative AI voice chatbots are now almost impossible to distinguish from real humans. The technology impresses in innocent contexts, but threat actors could adapt it despite detection risks. This could transform attacks: for example, social engineering against finance teams to steal banking details, or deepfake impersonation scams that trick help desks into resetting MFA, enabling broader access.
Beyond the risks, how will advanced AI enhance security? This year, advanced AI will aid security operations center (SOC) teams, reducing the reactive burden to free time for proactive work such as threat research. It will take on more administrative tasks, freeing analysts to focus on emerging threats. Machine learning will expand threat detection by baselining behavior and traffic for anomaly scoring, yielding higher-confidence alerts that cut fatigue and false positives, as the sketch below illustrates.
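For illustration only, here is a minimal sketch of that kind of behavioral baselining and anomaly scoring, using scikit-learn's IsolationForest. The feature names, values, and thresholds are assumptions made up for the example, not any vendor's actual detection logic.

```python
# A minimal sketch of behavioral baselining for anomaly scoring.
# Assumed/hypothetical: the per-user feature set and its distributions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Baseline: "normal" per-user activity samples, with illustrative features
# [logins_per_hour, bytes_out_mb, distinct_hosts_contacted]
baseline = rng.normal(loc=[5, 20, 3], scale=[1, 5, 1], size=(1000, 3))

# Learn what normal looks like; ~1% of activity assumed anomalous
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(baseline)

# New observations: one routine, one resembling data exfiltration
new = np.array([[5.0, 22.0, 3.0], [4.0, 900.0, 40.0]])
scores = model.decision_function(new)  # lower (negative) = more anomalous

for features, score in zip(new, scores):
    flag = "ALERT" if score < 0 else "ok"
    print(f"{flag}: features={features}, anomaly_score={score:.3f}")
```

Because the model scores deviation from a learned baseline rather than matching signatures, only the unusual observation surfaces, which is how this approach can cut alert volume and false positives.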

Eric Russo, Director, SOC Defensive Security.
3. Watch how the good guys lag behind the bad actors in AI
Autonomous AI will evolve amid reliability concerns. It will be used to analyze data for vulnerabilities in real time, helping threat actors exploit weaknesses faster.
Emerging agentic-style AI could support phishing campaigns, monitor defenses, and crack CAPTCHAs, though operating at that scale remains hypothetical.
Against this threat landscape, traditional defenses may lag behind AI's adaptability, and investment in AI-enhanced threat detection and response will become necessary.
In the meantime, how can organizations protect their own agentic AI implementations? Cybersecurity leadership now means overseeing AI systems as well as people, enhancing productivity while navigating ethical challenges. This year, proficiency in data processing and analytics will shape how these systems are built for the business and aligned responsibly with its values, although governance will lag behind deployment.

Jesus Cordero-Guzman, Solutions Architect (Application, Network Security and XDR), EMEA.
4. Guarding the IAM element of insidious AI threats
In 2026, emerging agentic AI may contribute to adaptive polymorphic malware that analyzes its environment and rewrites its own code to evade signature-based and behavioral defenses. Multiple agents might coordinate with minimal supervision, risking hijacked interactions. Misuse of APIs, gateways, agentic service APIs, and chatbots could increase, and API lifecycle management must handle the dynamic interfaces that agents create.
AI systems will require tight identity and access management (IAM) that treats agents as entities with their own privileges. Extend zero trust to verify every action, and monitor agent behavior for deviations. Secure agent communications with authentication, encryption, and logging for traceability, and comply with the NIST AI Risk Management Framework. The sketch below illustrates the per-action verification idea.
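As an illustration only, here is a minimal, self-contained Python sketch of zero-trust, least-privilege checks for an agent identity. The agent name, scope strings, and policy shape are hypothetical assumptions for the example, not a specific product's API.

```python
# A minimal sketch of zero-trust checks for an AI agent: every action is
# verified against the agent's granted scopes and logged for traceability.
# Assumed/hypothetical: the agent ID and the scope naming scheme.
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("agent-iam")

@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str
    scopes: frozenset  # least-privilege grants, e.g. "tickets:read"

def authorize(agent: AgentIdentity, action: str) -> bool:
    """Verify each action explicitly: no implicit trust between calls."""
    allowed = action in agent.scopes
    log.info("agent=%s action=%s allowed=%s", agent.agent_id, action, allowed)
    return allowed

triage_bot = AgentIdentity("triage-bot-01", frozenset({"tickets:read"}))

for action in ("tickets:read", "tickets:delete"):
    if authorize(triage_bot, action):
        print(f"{action}: proceeding")
    else:
        print(f"{action}: denied and flagged for review")
```

In practice the scopes would come from an IAM platform and the log would feed a SIEM for deviation monitoring, but the principle is the same: no agent action is implicitly trusted.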

Rohit Aradhya, VP and Managing Director, App Security Engineering.