As organizations accelerated AI adoption in 2025, a new and underappreciated category of third-party risk has emerged: agentic AI.
Unlike traditional third-party vendor risks centered on shared credentials or software vulnerabilities, the agentic AI risk involves autonomous, credentialed AI agents that interact with enterprise systems and execute tasks without human intervention or explicit permission.
Omer Grossman, CIO, CyberArk, revealed that while 72% of employees now use AI tools, 59% of organizations lack identity management protocols for these autonomous AI agents.
This gap exposes enterprises to unwitting insider threats and dynamic security risks as these AI agents perform critical actions entirely on their own, from updating sensitive customer data and dispatching emails to provisioning cloud resources.
CybersecAsia gained more insights for 2026 from Grossman in this interview:
What unique risks do autonomous AI agents introduce to an organization?
Grossman: The world is moving beyond predefined automation toward autonomous, non-deterministic decision-making. Artificial intelligence (AI) agents are executing tasks and making decisions at machine speed. That is why trust models require a rethink before these agents become too deeply embedded.
Beyond traditional human and machine identities, agents are a new type of entity: they act like humans, but at the speed and scale of machines. As such, we need to address them differently.
From our observation, three unique risks stand out. The first is unsupervised decision-making, something enterprises are not accustomed to, since their governance structures are built on accountability. Autonomous agents work beyond these boundaries, making decisions at machine speed that can propagate bias. Without clear oversight, small errors can scale into systemic failures.
The second unique risk relates to privilege escalation and credential misuse. AI agents often operate with high privileges but little transparency. If an agent is compromised, its wide access can amplify the blast radius of an attack. Agent behavior is not deterministic, and this makes traditional access control insufficient.
The third unique risk is auditability: agents can trigger unexpected outcomes that are hard to trace or explain. Imagine a chain of AI agents performing cross-platform tasks. One misinterprets an instruction, and suddenly sensitive data is at risk.
Humans make mistakes too, but in those cases, the cause can be traced and remediated through established change management. With autonomous agents, observability and accountability become far more complex.
Organizations need new ways to validate, monitor, and trust AI agents because the systems used to secure automation will not be enough to secure autonomy.
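To make the observability point concrete, here is a minimal illustrative sketch (not any specific vendor's implementation) of attributing each agent action to the agent's own identity and recording it before execution, so that an unexpected outcome can be traced back to the entity that caused it. The agent identifier, action names, and logging destination are hypothetical.

```python
# Hypothetical sketch: attribute every agent action to a distinct agent
# identity and record it before execution, so outcomes stay traceable.
import json
import logging
import uuid
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent-audit")

def run_agent_action(agent_id: str, action: str, target: str, executor):
    """Record which agent did what, where, and when, then execute."""
    record = {
        "event_id": str(uuid.uuid4()),
        "agent_id": agent_id,   # the agent's own identity, never a shared human account
        "action": action,
        "target": target,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    audit_log.info(json.dumps(record))  # in practice, ship to a tamper-evident store
    return executor()

# Example: an agent updating a CRM record under its own identity
run_agent_action(
    agent_id="agent://crm-summarizer-01",
    action="update_record",
    target="crm/accounts/42",
    executor=lambda: "ok",
)
```

The key design choice in such a scheme is that the audit record is written before the action runs and is keyed to a per-agent identity, so a chain of agent-to-agent calls can still be reconstructed after the fact.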
How does the credentialed nature of agentic AI change the way third-party risk should be managed, particularly when it comes to privileged access and identity governance?
Grossman: What we are observing now is something deeper and more complex than traditional third-party risk. Every major Software-as-a-Service (SaaS) platform today integrates AI agents: Salesforce uses them for customer relationship management (CRM) productivity, and ServiceNow uses them for analytics and automation.
Though these agents deliver value, they also introduce a new visibility gap: the average CISO or CIO may not know precisely what these agents can access. Within Salesforce, for example, only a few administrators typically have full data access. What happens, then, when an AI agent deployed and managed by Salesforce is granted system-level permissions that were not explicitly approved? In one case, a vendor-deployed AI agent came with hard-coded credentials that bypassed standard onboarding authentication checks.
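As an illustration of the alternative to hard-coded credentials, the sketch below shows an agent requesting a short-lived, narrowly scoped token from an enterprise secrets manager at runtime rather than shipping with a baked-in secret. The vault URL, scope string, and helper function are assumptions for illustration, not a real product API.

```python
# Hypothetical sketch: instead of a credential baked into a vendor-deployed
# agent, the agent requests a short-lived, scoped token at runtime.
import os
import time

# Anti-pattern: a credential shipped inside the agent, invisible to IT and never rotated
# SALESFORCE_TOKEN = "hard-coded-secret"

def get_scoped_token(vault_url: str, agent_identity: str, scope: str) -> dict:
    """Placeholder for an authenticated call to a secrets manager / token service."""
    # A real implementation would authenticate the agent identity and return
    # a token limited to the requested scope and lifetime.
    return {
        "token": os.urandom(16).hex(),
        "scope": scope,
        "expires_at": time.time() + 900,  # 15 minutes, then re-issue
    }

creds = get_scoped_token(
    vault_url="https://vault.example.internal",
    agent_identity="agent://vendor-crm-assistant",
    scope="crm:read:accounts",  # least privilege, not admin-level access
)
```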
If a vendor’s internal AI service is built on another company’s foundation model, such as Llama, DeepSeek, or GPT, then that provider becomes your fourth party, and the chain can extend to the nth party.
If that model uses training data or software components from open-source repositories, you are indirectly exposed to potential vulnerabilities all the way upstream.
This supply chain risk can propagate into the algorithmic layer itself. As an industry, we strengthened SaaS Security Posture Management (SSPM) and enforced least privilege on third-party integrations. In response, attackers moved upstream to target fourth parties, as in the Salesloft-Drift case, where attackers targeted the Salesforce integration and from there compromised hundreds of enterprises.
Organizations must evolve from securing visible third-party systems to managing invisible algorithmic dependencies. Dynamic privilege enforcement and cross-vendor visibility must become standard practice. Zero Trust for AI supports this by assuming that even trusted partners’ autonomous systems could be compromised. And the principle of least privilege still holds.
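To illustrate what dynamic privilege enforcement for an agent could look like in practice, here is a minimal sketch in which every request is evaluated against a scoped, expiring grant at the moment of use instead of relying on a standing entitlement. The agent identifiers, action names, and data structures are assumptions for illustration only.

```python
# Hypothetical sketch of dynamic privilege enforcement: identity, scope,
# and freshness are all checked on every agent request.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AgentGrant:
    agent_id: str
    allowed_actions: frozenset
    expires_at: datetime

def is_allowed(grant: AgentGrant, agent_id: str, action: str) -> bool:
    """Zero-trust style check performed at the moment of use, not at onboarding."""
    return (
        grant.agent_id == agent_id
        and action in grant.allowed_actions
        and datetime.now(timezone.utc) < grant.expires_at
    )

grant = AgentGrant(
    agent_id="agent://servicenow-reporter",
    allowed_actions=frozenset({"read_incident", "create_report"}),
    expires_at=datetime.now(timezone.utc) + timedelta(minutes=30),
)

print(is_allowed(grant, "agent://servicenow-reporter", "create_report"))    # True: in scope
print(is_allowed(grant, "agent://servicenow-reporter", "delete_incident"))  # False: outside scope
```

Because the grant expires, a compromised or misbehaving agent loses access automatically rather than retaining a broad, permanent entitlement.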