CybersecAsia – Cybersecurity News in Asia

Features

Shadow AI – the hidden risk in APAC organizations

By Victor Ng | Monday, November 24, 2025, 4:09 PM Asia/Singapore

Prevalent AI adoption among organizations across Asia Pacific brings with it significant security challenges such as Shadow AI…

As if pre-existing cyber risks were not giving CISOs and company boards enough headaches by day and nightmares by night, the widespread use of generative AI and advances in agentic AI are bringing new pain points to business, technology and security leaders across the region.

Enterprises across Asia Pacific are rushing to embrace AI copilots and assistants, from Microsoft Copilot to ChatGPT Enterprise. Yet, many security and business leaders admit they don’t actually know which AI tools are in use across their organizations, how they’re being configured, or whether they’re exposing sensitive data.

A recent study from KPMG found that 57% of workers are hiding their AI use from employers, with nearly half uploading company data into public AI tools.

What do we do, now that traditional security approaches no longer work? And how is Shadow AI casting a shadow over business resilience? We find out from Tomer Avni, VP of AI Security at Tenable.

What are the biggest AI security pain points CISOs and boards are grappling with today?

Tomer Avni: Organizations face significant challenges in securing their expanding AI attack surface, due both to a lack of visibility into the AI tools being used and to AI manipulation.

Malicious Model Context Protocol (MCP) servers can manipulate AI agents by providing false tool descriptions or poisoned responses that trick AI systems into performing unauthorized actions. The problem is exacerbated when employees inadvertently share sensitive information while interacting with AI platforms and agents in ways that violate company policies.

Security teams often lack a comprehensive inventory of AI models, agents, data inputs and outputs, and integrations, making effective monitoring and control nearly impossible.

Traditional security approaches are insufficient to address these issues. To combat AI-driven threats, platforms such as Tenable One have been expanded with the introduction of Tenable AI Exposure, a solution designed to provide comprehensive visibility, anticipate threats, and prioritize efforts to prevent attacks associated with generative AI.

Why is Shadow AI a problem enterprises can’t afford to ignore?

Tomer Avni: Think of Shadow AI as the new Shadow IT, but worse. With Shadow IT, there was a way to spot unauthorized software or check devices on the network.

With AI, it’s much trickier because AI is everywhere. It’s in apps, in browser plug-ins, in the cloud, and sometimes even running on devices without the user knowing. This creates a huge blind spot.

Employees are eager to use these tools to save time, and if IT or security doesn’t offer a safe option, they’ll just use public tools anyway. In fact, surveys show that over half of employees are hiding their AI use from their managers, and many are pasting sensitive company data into public platforms.

The danger is that if you ignore it, the risks don’t just disappear; they go underground. Data leaks out of the company unnoticed. Outputs get tampered with, leading to bad decisions.

As AI agents become more independent, they don’t just assist with the workload; they act on it autonomously. That means Shadow AI isn’t just an annoyance; it’s a direct threat to the business. If company boards take it lightly, they’re going to face serious problems very soon.

How can organizations gain fuller visibility into AI copilots, agents, and assistants to mitigate risks such as data leakage, misconfigurations, and prompt injection attacks?

Tomer Avni: Organizations must first establish a comprehensive understanding of the AI platforms currently in use.

This initial step is often challenging, as AI can manifest in various forms, including browser extensions, embedded agents in productivity suites, or models operating within cloud environments. Therefore, organizations need to identify all AI tools in play, their users, and their toxic combinations to establish a foundational baseline.
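That foundational baseline can be pictured as a simple asset registry. The sketch below is purely illustrative (the `AIAsset` schema, scope names, and "toxic combination" rule are hypothetical, not Tenable's implementation); it shows the idea of cataloguing each AI tool, its users, and the data it can reach, then flagging risky overlaps:

```python
from dataclasses import dataclass, field

@dataclass
class AIAsset:
    """One AI tool discovered in the environment (hypothetical schema)."""
    name: str
    kind: str                                      # e.g. "copilot", "browser-extension", "agent"
    users: set = field(default_factory=set)        # who uses it
    data_scopes: set = field(default_factory=set)  # data the tool can reach

def toxic_combinations(assets):
    """Flag assets that can reach sensitive data yet are used outside IT/security."""
    findings = []
    for a in assets:
        if "customer-pii" in a.data_scopes and a.users - {"it", "security"}:
            findings.append(a.name)
    return findings

inventory = [
    AIAsset("ChatGPT plug-in", "browser-extension",
            users={"marketing"}, data_scopes={"customer-pii"}),
    AIAsset("Code assistant", "copilot",
            users={"it"}, data_scopes={"source-code"}),
]
print(toxic_combinations(inventory))  # only the browser plug-in is flagged
```

In practice the inventory would be populated by discovery tooling rather than by hand; the point is that without such a baseline, the later audit steps have nothing to check against.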

Once this visibility is achieved, the subsequent step involves scrutinizing these systems for potential misconfigurations, which frequently harbor significant risks. It may be discovered that an assistant possesses excessive data access privileges or is connected to systems beyond its operational necessity.

For instance, an AI agent might be authorized to send emails or push code when its function is solely to read information. Such discrepancies can create vulnerabilities leading to data exposure and manipulation.
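The email/code example above is essentially a least-privilege check: compare what an agent has been granted against what its declared function requires. A minimal sketch (permission names are made up for illustration):

```python
def excessive_permissions(granted: set, required: set) -> set:
    """Return permissions an agent holds beyond what its role needs."""
    return granted - required

# An agent whose sole function is to read information,
# but which has also been granted send and push rights:
granted = {"read:mail", "send:mail", "push:code"}
required = {"read:mail"}
print(sorted(excessive_permissions(granted, required)))
# ['push:code', 'send:mail']
```

Each item in the result is a potential vector for data exposure or manipulation and a candidate for revocation.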

Following the identification of misconfigurations, a critical phase of prioritization and remediation is necessary. Not all risks carry equal weight; a bot generating marketing taglines presents a different risk profile than one integrated with source code or customer databases.

Consequently, organizations should address high-risk issues first by tightening permissions, disabling hazardous plug-ins, and restricting access to sensitive datasets.
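The triage logic above can be sketched as a simple risk-weighted sort. The sensitivity weights and finding records here are hypothetical; the takeaway is that a bot touching customer databases outranks one writing taglines:

```python
# Hypothetical sensitivity weights per data scope (higher = riskier).
SENSITIVITY = {"marketing-copy": 1, "source-code": 4, "customer-db": 5}

def prioritize(findings):
    """Sort misconfiguration findings so the riskiest are addressed first."""
    return sorted(findings, key=lambda f: SENSITIVITY[f["scope"]], reverse=True)

findings = [
    {"name": "tagline-bot", "scope": "marketing-copy"},
    {"name": "support-agent", "scope": "customer-db"},
    {"name": "dev-copilot", "scope": "source-code"},
]
print([f["name"] for f in prioritize(findings)])
# ['support-agent', 'dev-copilot', 'tagline-bot']
```

A real scoring model would also factor in exposure (permissions, integrations) alongside data sensitivity, but the ordering principle is the same.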

Concurrently, it is imperative to consider the threat landscape, as prompt injection via model context protocol and poisoned data attacks are already prevalent. Systems must be evaluated for their resilience, and continuous monitoring for suspicious behavior is essential.

It is crucial to recognize that this endeavor is not a one-time project. AI tools are constantly evolving, with new features, plug-ins, and use cases emerging daily. Without continuous monitoring, organizations will perpetually find themselves in a reactive position. Achieving control over AI necessitates a continuous cycle of visibility, the identification and rectification of critical misconfigurations, and ongoing vigilance.

What should highly regulated industries such as finance and healthcare, and other data-heavy businesses, do now to align with frameworks like the EU AI Act?

Tomer Avni: In highly regulated industries, the significant shift is that merely stating a commitment to responsible AI will be insufficient; regulators will demand demonstrable proof.

The EU AI Act, for instance, emphasizes clear documentation, audit trails, and robust oversight. This necessitates that institutions such as banks or hospitals must be capable of precisely detailing which AI systems are in use, the data they are connected to, and the safeguards implemented to prevent misuse. Comprehensive records are essential to substantiate these claims, as compliance is practically impossible without such evidence.

Secondly, classification is crucial. Not all AI use cases present the same level of risk. Utilizing AI for tasks like generating marketing copy is likely considered low-risk.

However, employing AI to approve a loan or to assist a medical professional in making a diagnosis falls into the high-risk category. Such systems will be subject to more stringent requirements, including human oversight, tighter access controls, and continuous testing of outputs. Therefore, regulated industries must meticulously map their AI use cases to appropriate risk levels and apply corresponding controls.
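The tier-to-controls mapping described above can be sketched as a lookup table. The tiers and control lists below paraphrase the examples in this answer and are an illustration, not legal guidance on the EU AI Act:

```python
# Illustrative mapping of AI use cases to risk tiers (a sketch, not legal advice).
RISK_TIERS = {
    "marketing-copy": "low",
    "loan-approval": "high",
    "diagnosis-support": "high",
}
CONTROLS = {
    "low":  ["acceptable-use policy"],
    "high": ["human oversight", "tighter access controls", "continuous output testing"],
}

def controls_for(use_case: str) -> list:
    """Return the controls for a use case, defaulting unmapped cases to the strictest tier."""
    return CONTROLS[RISK_TIERS.get(use_case, "high")]

print(controls_for("loan-approval"))
# ['human oversight', 'tighter access controls', 'continuous output testing']
```

Defaulting unknown use cases to the strictest tier mirrors the mapping exercise the interviewee recommends: a use case must be explicitly classified before it earns lighter controls.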

Finally, the establishment of effective ‘acceptable AI use’ policies is paramount. Leading firms in this area have formed AI committees that integrate legal, compliance, security, and business leaders.

This interdisciplinary approach is vital because AI governance is not solely a security or a compliance issue; it encompasses both. Bringing these diverse perspectives to the table facilitates balanced decision-making. Organizations that establish such structures now will be significantly better positioned when regulators begin to pose challenging questions.

Copyright © 2025 CybersecAsia All Rights Reserved.