3. Considering agentic AI may chain tasks and propagate vulnerabilities across systems, how should risk assessment frameworks evolve to address this?
Grossman: First, we need to move towards behavioral and intent-based security models. Although the exact implementation remains a work in progress, these models focus on understanding agents’ behaviors and underlying intent rather than relying on static rules. This enables dynamic risk detection, capturing suspicious activity that traditional perimeter defenses might miss.
Second, it is critical to incorporate attack path simulation and policy-as-code practices. As a cybersecurity practitioner, I run internal attack simulations 24/7 within my environment. If a simulated attack succeeds, it immediately triggers an alert, and we go in and fix the issue. This lets me identify and close vulnerabilities before an attacker can exploit them. Constantly evolving the attack scenarios expands use-case coverage and improves overall cyber resilience.
When it comes to managing AI agents specifically, we need to expand these simulations to model AI-driven attack vectors and behaviors. This will allow us to detect and capture risks that are unique to autonomous decision-making processes.
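As a rough illustration, a continuous simulation loop might look like the following sketch; the scenario names and alerting hook are hypothetical stand-ins, not a specific breach-and-attack-simulation product.

```python
# Minimal sketch of a continuous attack-path simulation loop.
# Scenario names and the alert hook are illustrative assumptions.
import time
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class AttackScenario:
    name: str
    run: Callable[[], bool]  # returns True if the simulated attack succeeded

def alert(scenario: AttackScenario) -> None:
    # In practice this would page the on-call team or open a ticket.
    print(f"ALERT: simulated attack '{scenario.name}' succeeded - remediate now")

def simulate_forever(scenarios: List[AttackScenario], interval_s: int = 3600) -> None:
    while True:
        for scenario in scenarios:
            if scenario.run():      # attack path still open
                alert(scenario)     # fix it before a real attacker finds it
        time.sleep(interval_s)      # rerun continuously ("24/7")

# Hypothetical scenario that always finds an open path:
stale_account = AttackScenario("stale-service-account-escalation", run=lambda: True)
# simulate_forever([stale_account])  # would alert on every cycle
```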
Finally, the security policies governing autonomous agents must continually evolve. Organizations should adopt policy-as-code frameworks that enable dynamic, continuous adaptation, similar to continuous deployment in software delivery. With policies expressed as code, they can evolve as threat landscapes and autonomous system behaviors change, maintaining effective risk control.
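To illustrate, a minimal policy-as-code sketch might look like this; the policy fields, resource names, and default-deny evaluation are illustrative assumptions rather than any specific framework's syntax.

```python
# Policy-as-code sketch: policies live in version control as data, are loaded
# at runtime, and can be redeployed continuously like any other code change.
from typing import Dict, List

POLICIES: List[Dict] = [
    {"id": "deny-prod-writes", "effect": "deny",
     "match": {"resource": "prod-db", "action": "write"}},
    {"id": "allow-read-docs", "effect": "allow",
     "match": {"resource": "docs", "action": "read"}},
]

def evaluate(request: Dict, policies: List[Dict] = POLICIES) -> str:
    """Return 'allow' or 'deny' for an agent request; default-deny."""
    for policy in policies:
        if all(request.get(k) == v for k, v in policy["match"].items()):
            return policy["effect"]
    return "deny"

# Example: an agent asking to write to the production database is refused.
print(evaluate({"resource": "prod-db", "action": "write"}))  # -> "deny"
```

Because the policy set is just data under version control, tightening a boundary is a pull request and a redeploy, not a manual console change.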
4. How can organizations detect and achieve visibility over agentic AI usage when these agents operate autonomously and sometimes without IT oversight?
Grossman: Effective governance of AI agents demands comprehensive inventory and discovery capabilities. You cannot protect what you cannot see, so inventory is mandatory. It is important that organizations can distinguish between known assets and unknown (or shadow) AI implementations.
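As a simple illustration of the known-versus-shadow distinction, discovery output can be diffed against the registered inventory; the agent names and data sources below are hypothetical.

```python
# Sketch of shadow-AI detection: compare what discovery (network or telemetry
# scans) observes against the registered agent inventory.
registered_agents = {"invoice-bot", "support-triage-agent"}
discovered_agents = {"invoice-bot", "support-triage-agent", "unknown-scraper-01"}

shadow_agents = discovered_agents - registered_agents
for agent in sorted(shadow_agents):
    print(f"Shadow AI detected: {agent} - assign an owner or block it")
```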
The best AI agents are dynamic entities that adapt over time. Therefore, continuous monitoring using telemetry, behavioral analytics, and other signals is essential to track their presence and detect behaviors that deviate from established baselines.
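A minimal sketch of baseline-driven detection, assuming a single behavioral metric such as hourly API calls and an illustrative z-score threshold, might look like this.

```python
# Baseline deviation check: a simple z-score test stands in for a real
# behavioral-analytics pipeline. Metric choice and threshold are assumptions.
from statistics import mean, stdev

def is_deviation(history: list, current: float, z_threshold: float = 3.0) -> bool:
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > z_threshold

baseline = [110, 95, 102, 98, 105, 99, 101]   # normal hourly API-call counts
print(is_deviation(baseline, 104))   # False - within the learned baseline
print(is_deviation(baseline, 900))   # True  - flag for investigation
```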
AI agents also require substantial computational resources, which can be costly. Without accurate visibility into AI agent inventory and usage, enterprises risk uncontrolled cloud expenditure. Imagine waking up to find that thousands of autonomous agents triggered extensive compute tasks overnight, potentially costing hundreds of thousands of dollars!
We need to treat AI agents as what they are – digital actors operating at machine speed. First, we need to get full visibility of agents’ actions. We need to know where they exist, what they are accessing, and who is responsible for them. If they connect or act, they should be part of your identity security program.
Next, we need to limit access. Mechanisms such as the Model Context Protocol (MCP) standardize how AI agents connect to external tools, prompts, and data. These connections are powerful, but they can expose sensitive records or trigger actions based on flawed logic. That makes each one a critical new entry point, not just part of a workflow. Guard these connections accordingly.
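As a rough sketch of guarding such connections, every tool call can pass through a gate that enforces an allowlist and records the attempt; this is a generic wrapper pattern with hypothetical agent and tool names, not the actual MCP SDK API.

```python
# Generic gate for agent tool calls: enforce an allowlist per agent identity
# and log every attempt. Agent IDs, tool names, and handlers are hypothetical.
from typing import Any, Callable, Dict

ALLOWED_TOOLS: Dict[str, set] = {
    "support-triage-agent": {"search_kb", "create_ticket"},
}

def guarded_call(agent_id: str, tool: str,
                 handler: Callable[..., Any], **kwargs: Any) -> Any:
    if tool not in ALLOWED_TOOLS.get(agent_id, set()):
        print(f"BLOCKED: {agent_id} tried to call {tool}")   # audit + alert
        raise PermissionError(f"{agent_id} is not allowed to call {tool}")
    print(f"ALLOWED: {agent_id} -> {tool}({kwargs})")        # audit trail
    return handler(**kwargs)

# Example: an unregistered tool call is refused before it reaches the handler.
# guarded_call("support-triage-agent", "delete_record", handler=lambda **kw: None)
```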
Last but not least, establish behavioral controls. AI agents move fast, so the organization’s security must move at that same speed. Instead of static rules, set dynamic boundaries focused on behavior, risk level, and business roles.
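To make dynamic boundaries concrete, here is a small sketch in which an agent's permitted actions shrink as its observed risk score rises; the roles, actions, and thresholds are illustrative assumptions.

```python
# Dynamic boundary sketch: blast radius depends on observed risk, not a single
# static rule. Roles, action names, and thresholds are illustrative.
def allowed_actions(role: str, risk_score: float) -> set:
    base = {"finance-agent": {"read_ledger", "draft_report", "post_journal"},
            "support-agent": {"read_kb", "reply_customer"}}.get(role, set())
    if risk_score > 0.8:
        return set()                                        # quarantine: no autonomous actions
    if risk_score > 0.5:
        return {a for a in base if a.startswith("read_")}   # degrade to read-only
    return base

print(allowed_actions("finance-agent", 0.2))   # full role permissions
print(allowed_actions("finance-agent", 0.6))   # {'read_ledger'}
```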
5. What are some critical steps organizations should take to prepare for growing third-party risks from agentic AI, especially with regard to transparency, audit trails, and compliance?
Grossman: At its core, managing AI agents is fundamentally an identity security challenge because these agents act as privileged identities within enterprise systems. From the perspective of a CISO or CIO, it is critical to mandate transparency from vendors when it comes to use of AI agents. At this point, it should be a formal requirement in vendor risk and security policies.
The rise in popularity of generative AI prompted a wave of legal and compliance measures to ensure proprietary data was not used to train external large language models or, at the very least, that opt-out options were offered. As AI agents bring new demands, enterprises must require vendors to disclose, to the greatest extent possible, how their agents are designed, what goals they pursue, and the reasoning mechanisms they employ.
At the same time, it is critical to maintain immutable audit trails for every AI agent action. Comprehensive auditing enables traceability, accountability, and forensic analysis. This allows enterprises to reconstruct sequences of events and understand the rationale behind decisions. This level of transparency forms the backbone of governance over autonomous systems.
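One way to approximate an immutable audit trail is a hash-chained, append-only log, so any after-the-fact edit becomes detectable; the minimal sketch below uses hypothetical field names and in-memory storage for illustration.

```python
# Tamper-evident audit trail sketch: each entry commits to the previous hash,
# so rewriting history breaks the chain. Fields and storage are illustrative.
import hashlib, json, time
from typing import Dict, List

audit_log: List[Dict] = []

def record_action(agent_id: str, action: str, detail: Dict) -> None:
    prev_hash = audit_log[-1]["hash"] if audit_log else "GENESIS"
    entry = {"ts": time.time(), "agent": agent_id, "action": action,
             "detail": detail, "prev_hash": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    audit_log.append(entry)

record_action("invoice-bot", "approve_payment",
              {"invoice": "INV-1042", "amount": 1800})
```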
Finally, compliance frameworks must evolve to cover AI agents explicitly. Existing certifications such as SOC 2 should expand their controls to include AI agent governance, and emerging standards such as ISO/IEC 42001 need to be integrated. The field is still evolving, but early adoption of such controls will be vital.



