Why AI resilience and data sovereignty are now top priorities for organizations in Asia Pacific.
The last week of January 2026 was Data Privacy Week, and we took the opportunity to ask three industry experts in the region to share their views on data privacy in the age of AI.

Gregory Statton, Chief Technology Officer, Asia Pacific and Japan, Cohesity:
Recent data exposures across ASEAN and the wider APAC region are a reminder that the threat landscape has shifted. Cybercriminals are no longer just attacking systems – they are targeting the foundational data that underpins our communities. This is not simply a security issue; it’s a signal that we must rethink how AI is used to protect our most sensitive data.
The starting point for data privacy today should be simple: ask not what you can do with AI, but what AI can do for you. In 2026, AI must move beyond hype and generic tools and be treated as a practical problem-solver. Organizations that focus on real business value (with data integrity and privacy built in from the ground up) will be the ones that emerge as winners in the era of AI.
Across APAC, interest in sovereign AI is accelerating as organizations recognize the importance of keeping data within corporate and geographic borders. A sovereign-first approach improves control, compliance, and strategic autonomy, but success depends on balance. Regulations must remain elastic enough to enable innovation without creating isolated data silos or inhibiting creativity.
Effective data protection also requires a shift away from one-size-fits-all platforms. AI now enables highly targeted, department-specific solutions where access is limited to those who truly need it. This approach reduces risk while improving speed and precision.
Finally, technology alone is not enough. Cybercriminals exploit people as much as systems. Building real resilience means empowering staff, students, and stakeholders to actively participate in data privacy. When human judgment is combined with AI-driven precision, organizations gain a level of protection that generic security tools simply cannot provide.

Wee Tee Hsien, Chief Executive Officer, FUJIFILM Business Innovation Singapore:
At the heart of AI lies data. For AI systems to operate effectively, they must be trained on trusted, high-quality data free from tampering. Embedding privacy-by-design principles into the workflow processes and adopting privacy-enhancing technologies such as encryption and access controls, in parallel with continuous employee education, are all important steps in laying the foundation for AI to become the strongest asset in protecting privacy – not our greatest risk.
Finally, accountability must sit firmly with senior leadership. Management should actively oversee AI and data privacy risks, with clear ownership, metrics, and escalation mechanisms. AI systems evolve over time, so privacy risk must be continuously monitored through audits, testing, and independent assessments. When accountability is clear and governance is strong, privacy becomes a foundation for trust rather than a constraint on growth. Organizations that get this right not only reduce risk – they also strengthen long-term customer confidence and gain competitive advantage in today’s digital economy.

Rachel Ler, Area Vice President, Asia, Fastly:
AI adoption is accelerating across Asia, but the fundamentals of data privacy remain unchanged – accountability and transparency are paramount. Organizations are still responsible for how data is collected, processed, and protected, whether it is handled by AI systems, cloud platforms, or channel partners.
Equally important is ensuring employees who handle personally identifiable information (PII) understand what can and cannot be shared with AI tools. This requires clear, enforceable AI policies that define approved use cases and explicitly prohibit the use of customer or sensitive data when interacting with AI models.
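Policies like these are often backed by technical guardrails that screen prompts before they reach an external model. As a minimal illustrative sketch (the patterns and function names below are hypothetical and not from any vendor mentioned in this article; production systems would use vetted, locale-aware detection libraries rather than ad-hoc regexes), a pre-submission PII filter might look like:

```python
import re

# Hypothetical patterns for a few common PII types. Real deployments
# rely on vetted detection libraries and locale-aware rules, not
# hand-rolled regexes like these.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    "nric": re.compile(r"\b[STFG]\d{7}[A-Z]\b"),  # Singapore NRIC format
}

def redact_pii(text: str) -> str:
    """Replace detected PII with labeled placeholders before the text
    is sent to an external AI tool."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

prompt = "Contact Tan at tan@example.com or +65 9123 4567."
print(redact_pii(prompt))
```

A filter in this vein makes an approved-use policy enforceable by default: employees can still use AI tools, but customer identifiers never leave the organization's boundary.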
In a region marked by diverse data sovereignty and regulatory requirements, knowing where data is processed and who can access it is critical. When data is used beyond its intended purpose or without transparency, trust is quickly eroded.
Privacy by design should therefore be embedded into AI strategies from the outset. This includes limiting data collection, improving visibility across hybrid and multi-cloud environments, and enforcing consistent security controls. Channel partners play a key role as trusted advisors, helping organizations design secure, compliant architectures and clearly communicate data practices.
Securing and processing data at the edge and closer to users helps organizations reduce risk, meet local requirements, and deliver trusted digital experiences across Asia.
