Cybersecurity News in Asia

CybersecAsia

Features

Emerging third-party cyber risks via agentic AI

By Victor Ng | Tuesday, February 3, 2026, 10:22 AM Asia/Singapore


3. Considering that agentic AI may chain tasks and propagate vulnerabilities across systems, how should risk assessment frameworks evolve to address this?

Grossman: First, we need to move towards behavioral and intent-based security models. Although the exact implementation remains a work in progress, these models focus on understanding agents’ behaviors and underlying intent rather than static rules. This aligns with dynamic risk detection, capturing suspicious activities that traditional perimeter defenses might miss.

Second, it is critical to incorporate attack path simulation and policy-as-code practices. As a cybersecurity practitioner, I run internal attack simulations 24/7 within my environment. If a simulated attack succeeds, it immediately triggers an alert, and we go in and fix the issue. This lets me find and close vulnerabilities myself before an attacker can exploit them. Constantly evolving the attack scenarios expands the use-case coverage and improves overall cyber resilience.

When it comes to managing AI agents specifically, we need to expand these simulations to model AI-driven attack vectors and behaviors. This will allow us to detect and capture risks that are unique to autonomous decision-making processes.

Finally, the security policies governing autonomous agents must continually evolve. They must adopt policy-as-code frameworks that enable dynamic, continuous adaptation, similar to continuous deployment in software delivery. By adopting these frameworks, policies can evolve as threat landscapes and autonomous system behaviors change, maintaining effective risk control.
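The policy-as-code idea described above can be sketched in a few lines: policies live as versioned code, so they can be reviewed, tested, and redeployed like any other software artifact. This is only an illustrative sketch; the resource names, risk thresholds, and `AgentAction` type are all hypothetical, not drawn from any specific product.

```python
from dataclasses import dataclass

@dataclass
class AgentAction:
    agent_id: str
    resource: str
    risk_score: float  # 0.0 (benign) .. 1.0 (critical), from behavioral analytics

def max_allowed_risk(resource: str) -> float:
    """Policy expressed as code: per-resource risk thresholds.

    Because this is code, a threshold change goes through review and
    continuous deployment, just like the article suggests.
    """
    policy = {
        "public-docs": 0.9,
        "customer-db": 0.3,
        "payment-api": 0.1,
    }
    return policy.get(resource, 0.2)  # conservative default for unknown resources

def is_allowed(action: AgentAction) -> bool:
    # Dynamic decision: same agent, different answer depending on risk and target.
    return action.risk_score <= max_allowed_risk(action.resource)

print(is_allowed(AgentAction("agent-7", "customer-db", 0.5)))  # False
print(is_allowed(AgentAction("agent-7", "public-docs", 0.5)))  # True
```

The point of the sketch is the shape, not the numbers: thresholds and resource classes would be tuned continuously as threat landscapes and agent behaviors change.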

How can organizations detect and achieve visibility over agentic AI usage when these agents operate autonomously and sometimes without IT oversight?

Grossman: Effective governance of AI agents demands comprehensive inventory and discovery capabilities. You cannot protect what you cannot see, so inventory is mandatory. Organizations must be able to distinguish between known assets and unknown (or shadow) AI implementations.

The best AI agents are dynamic entities that adapt over time. Therefore, continuous monitoring using telemetry, behavioral analytics, and other signals is essential to track their presence and detect behaviors that drift from established baselines.

AI agents also require substantial computation resources, which can be costly. Without accurate visibility into AI agent inventory and usage, enterprises risk uncontrolled cloud expenditure. Imagine waking up to find that thousands of autonomous agents triggered extensive compute tasks overnight, potentially costing hundreds of thousands of dollars!

We need to treat AI agents as what they are – digital actors operating at machine speed. First, we need to get full visibility of agents’ actions. We need to know where they exist, what they are accessing, and who is responsible for them. If they connect or act, they should be part of your identity security program.

Next, we need to limit access. Mechanisms such as Model Context Protocol (MCP) standardize how AI agents connect to external tools, prompts, and data. These connections are powerful, but they can expose sensitive records or trigger actions based on flawed logic. That makes each one a critical new entry point, not just part of a workflow. Guard these connections accordingly.

Last but not least, establish behavioral controls. AI agents move fast, so the organization’s security must move at that same speed. Instead of static rules, set dynamic boundaries focused on behavior, risk level, and business roles.

What are some critical steps organizations should take to prepare for growing third-party risks from agentic AI, especially with regards to transparency, audit trails, and compliance?

Grossman: At its core, managing AI agents is fundamentally an identity security challenge because these agents act as privileged identities within enterprise systems. From the perspective of a CISO or CIO, it is critical to mandate transparency from vendors when it comes to use of AI agents. At this point, it should be a formal requirement in vendor risk and security policies.

The rise in popularity of generative AI prompted a wave of legal and compliance measures to ensure large language models did not use proprietary data to train external models or, at the very least, offered opt-out options. As AI agents bring new demands, enterprises must require vendors to disclose how their agents are designed, what goals they pursue, and the reasoning mechanisms they employ, to the greatest extent possible.

At the same time, it is critical to maintain immutable audit trails for every AI agent action. Comprehensive auditing enables traceability, accountability, and forensic analysis. This allows enterprises to reconstruct sequences of events and understand the rationale behind decisions. This level of transparency forms the backbone of governance over autonomous systems.
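One common way to make an audit trail tamper-evident, in the spirit of the immutability requirement above, is hash chaining: each record's hash covers the previous record's hash, so editing any past entry breaks the chain. A minimal sketch, with hypothetical record fields:

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first record

def append(log: list[dict], agent_id: str, action: str) -> None:
    """Append a record whose hash commits to the previous record."""
    prev = log[-1]["hash"] if log else GENESIS
    record = {"agent": agent_id, "action": action, "prev": prev}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)

def verify(log: list[dict]) -> bool:
    """Recompute every hash; any edited or reordered record fails."""
    prev = GENESIS
    for rec in log:
        body = {k: v for k, v in rec.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if rec["prev"] != prev or digest != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

log: list[dict] = []
append(log, "agent-7", "read customer-db")
append(log, "agent-7", "export report")
print(verify(log))            # True
log[0]["action"] = "nothing"  # tamper with history
print(verify(log))            # False
```

Chaining alone shows that tampering happened; pairing it with write-once storage or periodic anchoring of the latest hash is what makes the trail effectively immutable for forensic reconstruction.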

Finally, compliance frameworks must evolve to cover AI agents explicitly. Existing certifications such as SOC 2 should expand controls to include AI agent governance, and emerging standards such as ISO/IEC 42001 need to be integrated. The field is still evolving, but early adoption of such controls will be vital.


Copyright © 2026 CybersecAsia All Rights Reserved.