Prevalent AI adoption among organizations across Asia Pacific brings with it significant security challenges such as Shadow IT…
As if pre-existing cyber risks are not giving CISOs and company boards enough headaches in the day and nightmares at night, widespread use of generative AI and advances in agentic AI are bringing new pain points to business, technology and security leaders across the region.
Enterprises across Asia Pacific are rushing to embrace AI copilots and assistants, from Microsoft Copilot to ChatGPT Enterprise. Yet, many security and business leaders admit they don’t actually know which AI tools are in use across their organizations, how they’re being configured, or whether they’re exposing sensitive data.
A recent study from KPMG found that 57% of workers are hiding their AI use from employers, with nearly half uploading company data into public AI tools.
What do we do, now that traditional security approaches won’t work anymore? And how is Shadow AI casting a looming gloom on business resilience? We find out from Tomer Avni, VP of AI Security, Tenable.
What are the biggest AI security pain points CISOs and boards are grappling with today?
Tomer Avni: Organizations face significant challenges in securing their expanding AI attack surface due to a lack of visibility into the AI tools being used and AI manipulation.
Malicious MCP servers can manipulate AI agents by providing false tool descriptions or poisoned responses that trick AI systems into performing unauthorized actions. This problem is exacerbated when employees inadvertently share sensitive information while interacting with AI platforms and agents in ways that violate company policies.
Security teams often lack a comprehensive inventory of AI models, agents, data inputs and outputs, and integrations, making effective monitoring and control nearly impossible.
Traditional security approaches are insufficient to address these issues. To combat AI-driven threats, platforms such as Tenable One have been expanded with the introduction of Tenable AI Exposure, a comprehensive solution designed to gain comprehensive visibility, anticipate threats and prioritize efforts to prevent attacks associated with generative AI.
Why is Shadow AI a problem enterprises can’t afford to ignore?
Tomer Avni: Think of Shadow AI as the new Shadow IT, but worse. With Shadow IT, there was a way to spot unauthorized software or check devices on the network.
With AI, it’s much trickier because AI is everywhere. It’s in the apps, as browser plug-ins, in the cloud, and sometimes even running on devices without the recipient knowing. This creates a huge blind spot.
Employees are eager to use these tools to save time, and if IT or security doesn’t offer a safe option, they’ll just use public tools anyway. In fact, surveys show that over half of employees are hiding their AI use from their managers, and many are pasting sensitive company data into public platforms.
The danger is that if you ignore it, the risks don’t just disappear; they go underground. Data leaks out of the company go unnoticed. Output gets messed up with, leading to bad decisions.
As AI agents become more independent, their autonomous nature takes on the workload itself; they act. That means Shadow AI isn’t just an annoyance; it’s a direct threat to the business. If company boards take it lightly, they’re going to face some serious problems very soon.
How can organizations gain fuller visibility into AI copilots, agents, and assistants to mitigate risks such as data leakage, misconfigurations, and prompt injection attacks?
Tomer Avni: Organizations must first establish a comprehensive understanding of the AI platforms currently in use.
This initial step is often challenging, as AI can manifest in various forms, including browser extensions, embedded agents in productivity suites, or models operating within cloud environments. Therefore, organizations need to identify all AI tools in play, their users, and their toxic combinations to establish a foundational baseline.
Once this visibility is achieved, the subsequent step involves scrutinizing these systems for potential misconfigurations, which frequently harbor significant risks. It may be discovered that an assistant possesses excessive data access privileges or is connected to systems beyond their operational necessity.
For instance, an AI agent might be authorized to send emails or push code when its function is solely to read information. Such discrepancies can create vulnerabilities leading to data exposure and manipulation.
Following the identification of misconfigurations, a critical phase of prioritization and remediation is necessary. Not all risks carry equal weight; a bot generating marketing taglines presents a different risk profile than one integrated with source code or customer databases.
Consequently, organizations should address high-risk issues first by tightening permissions, disabling hazardous plug-ins, and restricting access to sensitive datasets.
Concurrently, it is imperative to consider the threat landscape, as prompt injection via model context protocol and poisoned data attacks are already prevalent. Systems must be evaluated for their resilience, and continuous monitoring for suspicious behavior is essential.
It is crucial to recognize that this endeavor is not a one-time project. AI tools are constantly evolving, with new features, plug-ins, and use cases emerging daily. Without continuous monitoring, organizations will perpetually find themselves in a reactive position. Achieving control over AI necessitates a continuous cycle of visibility, the identification and rectification of critical misconfigurations, and ongoing vigilance.
What should highly regulated industries such as finance and healthcare, and other data-heavy businesses, do now to align with frameworks like the EU AI Act?
Tomer Avni: In highly regulated industries, the significant shift is that merely stating a commitment to responsible AI will be insufficient; regulators will demand demonstrable proof.
The EU AI Act, for instance, emphasizes clear documentation, audit trails, and robust oversight. This necessitates that institutions such as banks or hospitals must be capable of precisely detailing which AI systems are in use, the data they are connected to, and the safeguards implemented to prevent misuse. Comprehensive records are essential to substantiate these claims, as compliance is practically impossible without such evidence.
Secondly, classification is crucial. Not all AI use cases present the same level of risk. Utilizing AI for tasks like generating marketing copy is likely considered low-risk.
However, employing AI to approve a loan or to assist a medical professional in making a diagnosis falls into the high-risk category. Such systems will be subject to more stringent requirements, including human oversight, tighter access controls, and continuous testing of outputs. Therefore, regulated industries must meticulously map their AI use cases to appropriate risk levels and apply corresponding controls.
Finally, the establishment of effective ‘acceptable AI use’ policies is paramount. Leading firms in this area have formed AI committees that integrate legal, compliance, security, and business leaders.
This interdisciplinary approach is vital because AI governance is not solely a security or a compliance issue; it encompasses both. Bringing these diverse perspectives to the table facilitates balanced decision-making. Organizations that establish such structures now will be significantly better positioned when regulators begin to pose challenging questions.



