Netskope Threat Labs recently released its 2025 Asia Threat Labs Report, uncovering how AI risk in the region is moving beyond generative AI to enterprise AI platforms and agents.
The research found that while organizations in Asia are successfully bringing generative AI (genAI) risks under control, new data exposure challenges are emerging with the rise of enterprise AI platforms and autonomous AI agents.
Based on anonymized data from employees across 37 Asian countries, the report outlines the next wave of AI, cloud, and data security risks shaping Asia’s digital economy.
Some key findings include:
- 93% of organizations in Asia now observe genAI usage among employees, with policy violations dropping sharply as companies adopt data loss prevention (DLP) and real-time user coaching tools.
- AI risk is shifting: nearly half (46%) of Asian organizations are already using enterprise-grade AI platforms such as Microsoft Azure OpenAI and Amazon Bedrock, signaling the next phase of AI adoption.
- New supply chain vulnerabilities are emerging as AI agents and open-source models connect to cloud API endpoints such as api.openai.com (57%), often without IT oversight.
- Persistent cloud malware and data-leak risks remain, with OneDrive, GitHub, and Google Drive among the most common malware delivery vectors in Asia.
GenAI adoption in the workplace
In the last twelve months, genAI adoption has continued to grow, and today, 93% of organizations based in Asia are observing genAI usage among their workforce, up from 84% in late 2024.
The three most used applications have followed different trajectories: ChatGPT usage remained relatively flat over the period (75%; +3pts), while Google Gemini soared (74%; +29pts) and Microsoft Copilot saw consistent growth (51%; +12pts). Reflecting global trends, DeepSeek is the most blocked genAI application in Asian workplaces (46%).
Sensitive data leakage
Employees are still regularly, though in most cases inadvertently, attempting to leak sensitive data when using genAI apps. Source code accounts for the majority of data policy violations (62%), followed by regulated data (18%), which includes personal, financial, and healthcare information, and intellectual property (14%).
Additionally, a proportion of genAI usage is still occurring outside of IT and security teams’ purview, a trend known as “shadow AI”: more than one in three employees (35%) still use personal genAI accounts at work, preventing many security teams from detecting potential data leaks.
While this figure is still relatively high, it is a steep decline from late 2024 levels (79%), coinciding with a rise in the deployment of organization-approved genAI solutions among workforces from 17% to 55% over the same period. This trend, along with the growing deployment of data security guardrails around genAI usage, such as DLP and real-time user coaching tools, is a clear sign that organizations are successfully bringing genAI-related “shadow AI” under control.
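To make the mechanics concrete, the following is a minimal sketch, in Python, of the kind of pre-prompt check a DLP or real-time coaching tool performs. The detection patterns, function names, and coaching message are illustrative assumptions, not Netskope’s implementation; production DLP engines rely on far richer classifiers.

```python
import re

# Illustrative patterns only; a production DLP engine uses far richer
# classifiers (exact-data matching, fingerprinting, ML detectors, etc.).
POLICY_PATTERNS = {
    "source code": re.compile(r"\b(?:def |class |import |#include|function\s*\()"),
    "regulated data": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # SSN-style identifier
    "credentials": re.compile(r"\bsk-[A-Za-z0-9_-]{16,}\b"),  # API-key-like token
}

def check_prompt(prompt: str) -> list[str]:
    """Return the policy categories a prompt appears to violate."""
    return [label for label, rx in POLICY_PATTERNS.items() if rx.search(prompt)]

def coach_user(prompt: str) -> bool:
    """Real-time coaching: warn the user and block until the prompt is edited."""
    violations = check_prompt(prompt)
    if not violations:
        return True  # safe to forward to the genAI app
    print("Warning: this prompt may contain " + ", ".join(violations) +
          ". Please remove sensitive data before sending.")
    return False

if __name__ == "__main__":
    # Example: a snippet of source code containing an API-key-like string.
    sample = "def rotate(): key = 'sk-test_abcdefghijklmnop'"
    print("allowed" if coach_user(sample) else "blocked")
```

Real products pair detection with graduated policies (allow, warn, block) and log each event for the security team; the sketch collapses that into a single warn-and-block path.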
Rise of AI agents and shadow AI
Now, with the rise of AI agents, data security and “shadow AI” risks are shifting. Organizations based in Asia are moving to enterprise-grade AI platforms that allow them to build and deploy private and custom AI on their own secure infrastructure, including genAI models for specific functions and departments, or AI agents that can autonomously perform complex tasks.
Almost half of Asian workplaces (46%) are observing usage of AI platforms such as Microsoft Azure OpenAI or Amazon Bedrock. For companies, the risk lies in employees experimenting with these tools without the knowledge of IT or security teams, just as they did when genAI apps first emerged in 2023. Some sophisticated AI tools, such as LLM interfaces, have little to no embedded security and require checks and adjustments before they can be used safely.
Security teams also need to restrict the permissions of AI agents to ensure they do not access and expose sensitive data when training or executing their tasks.
This is especially important because AI agents deployed on-premises often rely on external models hosted in the cloud, connecting to dedicated API endpoints.
For example, more than half of organizations in Asia (57%) connect to api.openai.com, suggesting that they run AI tools and agents that rely on OpenAI’s models. Building custom AI models or agents on-premises also introduces AI supply chain risks: attackers often distribute infected models and tools on the popular open-source platforms that developers draw on when building AI.
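A basic mitigation for this supply-chain risk is to pin and verify the checksums of third-party model artifacts before they are ever loaded. The sketch below assumes a hypothetical registry of vetted file names and digests; real deployments would add artifact signing and a private model registry on top of this.

```python
import hashlib
import sys
from pathlib import Path

# Hypothetical registry of vetted artifacts and their SHA-256 digests,
# recorded when each model was first reviewed. In practice this would be
# a signed lockfile or an internal model registry, not a literal dict.
VETTED_MODELS = {
    "llama-finetune-q4.gguf":
        "0000000000000000000000000000000000000000000000000000000000000000",
}

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large model files fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def load_model_safely(path: Path) -> None:
    """Refuse to load any artifact whose digest is not pinned in the registry."""
    expected = VETTED_MODELS.get(path.name)
    if expected is None:
        raise PermissionError(f"{path.name} is not in the vetted-model registry")
    if sha256_of(path) != expected:
        raise PermissionError(f"{path.name} does not match its pinned digest")
    print(f"{path.name} verified; safe to hand to the model loader")

if __name__ == "__main__":
    load_model_safely(Path(sys.argv[1]))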
Commenting on AI-related risk, Gianpietro Cutolo, Cloud Threat Researcher, Netskope Threat Labs, and author of the report, said: “Organizations in Asia are making strides in eliminating genAI risk, but that doesn’t mean the attack surface is shrinking; it’s only shifting with the emergence of new AI usage and deployments. Innovation is crucial, and organizations based in Asia should explore the potential of AI to generate efficiencies. But letting AI innovation spread without security oversight poses major cyber and data security risks, and security teams should prioritize eliminating shadow AI by gaining visibility and applying controls over AI deployments among their workforce, beyond just generative AI applications.”
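Gaining that visibility can start as simply as reviewing egress logs for known AI API endpoints. The sketch below scans a proxy log export for such hostnames; api.openai.com comes from the report’s findings, while the other endpoints, the CSV format, and the column names are assumptions to be adapted to whatever local proxy or DNS logging provides.

```python
import csv
from collections import Counter

# Hostnames associated with AI services. api.openai.com is cited in the
# report; the others are illustrative examples of endpoints a security
# team might also track.
AI_ENDPOINTS = {
    "api.openai.com",
    "bedrock-runtime.us-east-1.amazonaws.com",
    "generativelanguage.googleapis.com",
}

def shadow_ai_report(proxy_log_csv: str) -> Counter:
    """Count connections per (user, AI endpoint) pair from a proxy log.

    Assumes a CSV export with 'user' and 'dest_host' columns; adapt the
    field names to the schema your proxy or DNS logs actually use.
    """
    hits: Counter = Counter()
    with open(proxy_log_csv, newline="") as f:
        for row in csv.DictReader(f):
            host = row["dest_host"].lower()
            if host in AI_ENDPOINTS:
                hits[(row["user"], host)] += 1
    return hits

if __name__ == "__main__":
    for (user, host), n in shadow_ai_report("proxy_log.csv").most_common(10):
        print(f"{user} -> {host}: {n} connections")
```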
Cloud security risks
Beyond AI, the report discusses the enduring threat of cloud-delivered malware and the data security risks related to the widespread use of personal cloud apps in the workplace:
- More than one in ten organizations (11%) detect employees downloading malware from Microsoft OneDrive each month, followed by GitHub (10%) and Google Drive (7.4%). While these cloud providers actively remove malicious content, the brief period before detection often gives attackers a sufficient window of opportunity.
- Employees based in Asia use personal applications extensively in the workplace. The most used are LinkedIn, personal ChatGPT accounts, and personal Google Drive accounts, observed on average in 84%, 82%, and 82% of organizations respectively each month. Instances of employees attempting to share sensitive data via personal cloud applications are regularly observed, with regulated data accounting for almost half (44%) of data policy violations, followed by source code (33%) and intellectual property (14%).



