Generative AI adoption is accelerating data leaks, attacks, and unpredictable risks and biases, demanding new thinking about resilient defenses beyond traditional cyber tools.
Across the Asia Pacific region (APAC), countries have different national AI strategies: Australia prioritizes responsible, trust-based adoption; Japan embeds AI in Society 5.0; Singapore focuses on centers of excellence, talent, and AI for public good.
While these strategies are driving AI adoption, they also introduce hidden security risks in AI systems and data. For instance, organizations today are rapidly adopting GenAI tools such as writing assistants and coding applications for significant productivity gains. However, this widespread use is fueling a parallel surge in data leaks, intellectual property loss, and regulatory exposure.
Cybercriminals are leveraging AI to enhance the speed and sophistication of attacks, so traditional security tools are no longer sufficient. Additionally, some surveys show that many organizations have limited to no control over data shared in GenAI tools, while a substantial portion of encrypted traffic goes uninspected, leaving firms vulnerable to hidden threats such as malware and data exfiltration.
How can organizations in the region build resilient security strategies that keep pace with AI-driven threats?
Strengthening organizational resilience
First, organizations should review their cybersecurity strategies to address tool sprawl and complexity. Regional leaders polled often cite complexity as their biggest security challenge. Managing security across numerous tools, systems, processes, and cyber threats is increasingly difficult, and addressing it requires moving beyond a piecemeal approach to security.
Organizations should evaluate integrated security approaches to consolidate tools and improve efficiency, though ROI varies by implementation. By combining multiple tools, an integrated approach simplifies security and typically yields a higher return on investment than fragmented strategies, while fostering better communication and decision-making among stakeholders. In addition, public-private partnerships are vital for building a stronger cybersecurity ecosystem, as they create solutions that meet national needs. To effectively manage cascading or cross-sector AI risks, a structured co-governance model is essential: one that enables governments and operators to jointly test AI systems, updates, and high-risk use cases.
Next, cybersecurity cannot stop at the organizational perimeter. To curb lateral movement, extend identity security and privilege management to all users, external suppliers, devices, and apps using zero trust principles.
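The zero trust idea above can be sketched as a default-deny policy check: every request, whether from an employee or an external supplier, is verified against identity, device posture, and least-privilege rules rather than trusted by network location. This is a minimal illustration; the roles, resources, and policy data are hypothetical, not a specific product's API.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    role: str             # e.g. "employee" or "supplier" (illustrative roles)
    device_trusted: bool  # did the device pass posture checks?
    resource: str
    action: str

# Least-privilege policy: each role may touch only listed resources/actions.
# Illustrative policy data, not a real configuration.
POLICY = {
    "employee": {"crm": {"read", "write"}, "wiki": {"read", "write"}},
    "supplier": {"wiki": {"read"}},
}

def authorize(req: AccessRequest) -> bool:
    """Zero trust: verify every request; deny by default."""
    if not req.device_trusted:            # unmanaged device -> deny
        return False
    allowed = POLICY.get(req.role, {})    # unknown role -> empty -> deny
    return req.action in allowed.get(req.resource, set())

# A supplier on a trusted device can read the wiki but cannot write the CRM,
# which limits lateral movement even if the supplier's credentials are stolen.
print(authorize(AccessRequest("s1", "supplier", True, "wiki", "read")))   # True
print(authorize(AccessRequest("s1", "supplier", True, "crm", "write")))   # False
```

The key design choice is default deny: access is granted only when an explicit policy entry matches, so a missing rule fails closed rather than open.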
Equally important is ensuring accountability. Defenders need to combine automated detection with human review. In addition, employees must be empowered through education and strong internal safeguards to uphold a culture of security.
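Combining automated detection with human review can be sketched as a simple triage step: high-confidence alerts are handled automatically, ambiguous ones are routed to an analyst, and the rest are logged. The scoring detector and the thresholds here are assumptions for illustration, not prescribed values.

```python
def triage(alerts, auto_threshold=0.9, review_threshold=0.5):
    """Route scored alerts: auto-block high confidence, send mid-range
    scores to human review, and log the rest. Thresholds are illustrative."""
    auto_block, human_review, log_only = [], [], []
    for alert in alerts:
        if alert["score"] >= auto_threshold:
            auto_block.append(alert)       # confident enough to act on
        elif alert["score"] >= review_threshold:
            human_review.append(alert)     # accountability: a person decides
        else:
            log_only.append(alert)         # keep for audit and tuning
    return auto_block, human_review, log_only

alerts = [{"id": 1, "score": 0.95}, {"id": 2, "score": 0.7}, {"id": 3, "score": 0.2}]
blocked, review, logged = triage(alerts)
print(len(blocked), len(review), len(logged))  # 1 1 1
```

Keeping a human in the loop for the ambiguous middle band is what preserves accountability while automation absorbs the clear-cut volume.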
Ultimately, AI security is a shared responsibility, requiring a strong mix of technology, people, and partnerships. By uniting zero trust principles with Responsible AI governance, organizations can build a resilient, adaptive security posture that is not only prepared for today’s threats but also future-ready for the evolving landscape.
Resilience has to be achieved through a prevention-first paradigm, ensuring attackers are stopped before breaches occur.


