State-sponsored actors from multiple countries continue to misuse generative AI tools to enhance all stages of their operations.
In this interview, Steve Ledzian, CTO, Google Cloud Security and Mandiant, JAPAC, shared findings from Google’s Threat Intelligence Group (GTIG) on the use of generative AI by threat actors in the region.
We zoomed in on how state-sponsored hacking groups in the region are weaponizing AI across the various stages of their operations, and how cyber-defenders can respond to build resilience.
How are state-sponsored hacking groups weaponizing generative AI?
Ledzian: In January 2025, GTIG published a blog, “Adversarial Misuse of Generative AI”, providing a detailed analysis of the ways its analysts saw threat actors attempting to interact with Google’s Gemini for malicious purposes.
At the time, most of the activity GTIG observed involved attackers seeking productivity gains; they had not yet developed novel capabilities using AI.
Just short of a year later, in November 2025, GTIG released a second blog, “GTIG AI Threat Tracker: Advances in Threat Actor Usage of AI Tools”, detailing how threat actors had advanced in the intervening months. In it, GTIG identified a shift that occurred within the last year: adversaries were no longer leveraging AI just for productivity gains; they were deploying novel AI-enabled malware in active operations.
This marked a new operational phase of AI abuse, involving tools that dynamically alter behavior mid-execution.
In addition to threat actors attempting to misuse Gemini, GTIG observed AI tools being sold in underground forums that provide capabilities such as deepfake and image generation, malware development, phishing, research and reconnaissance, technical support and code generation, and vulnerability exploitation.
In Asia Pacific, how is generative AI supercharging every stage of the attack lifecycle, and across language borders?
Ledzian: In both the January and November blogs, GTIG reported observing threat actors using AI across all stages of the attack lifecycle. These stages include reconnaissance, initial compromise, establishing a foothold, escalating privileges, moving laterally, maintaining presence and completing the mission.
In 2025, GTIG observed state-sponsored actors from multiple countries continuing to misuse generative AI tools to enhance all stages of their operations, from reconnaissance and phishing lure creation to command-and-control development and data exfiltration.
Phishing lures are a particularly interesting use case: many organizations train their staff to treat spelling or grammatical errors as a tip-off that an email may be inauthentic, possibly written by a non-native speaker trying to scam the recipient. That changes as attackers adopt generative AI, which can produce fluent, well-articulated messages in any language.
How are attackers evading safety and security filters in the AI-versus-AI cybersecurity race?
Ledzian: Like humans, generative AI can be susceptible to social engineering. If a threat actor asks AI to help find vulnerabilities on a compromised system, AI’s guardrails and security filters will likely kick in and deny the request.
However, if a student participating in a capture-the-flag (CTF) exercise, a gamified cybersecurity competition for building skills and techniques, asks the exact same question in that context, the request could be entirely benign.
This nuance highlights the critical differentiators between benign use and misuse of AI, which we continue to analyze in order to balance Gemini’s functionality with both usability and security.
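To illustrate why context matters, here is a minimal, hypothetical screening sketch in Python. All names, topics, and rules here are invented for illustration; real safety filters, including Gemini’s, are far more sophisticated than simple keyword matching.

```python
from dataclasses import dataclass

# Hypothetical illustration only: real guardrails are not keyword rules.
SENSITIVE_TOPICS = {"vulnerability discovery", "privilege escalation", "exploit development"}
BENIGN_CONTEXTS = {"ctf", "capture-the-flag", "security coursework"}

@dataclass
class Request:
    topic: str            # e.g. "vulnerability discovery"
    stated_context: str   # e.g. "practicing for a CTF exercise"

def screen(request: Request) -> str:
    """Return a coarse decision: 'allow', 'deny', or 'review'."""
    topic_is_sensitive = request.topic.lower() in SENSITIVE_TOPICS
    context_looks_benign = any(c in request.stated_context.lower() for c in BENIGN_CONTEXTS)

    if not topic_is_sensitive:
        return "allow"
    if context_looks_benign:
        # The same question can be legitimate in an educational setting;
        # route it to additional checks rather than a blanket deny.
        return "review"
    return "deny"

print(screen(Request("vulnerability discovery", "post-compromise recon on a target")))  # deny
print(screen(Request("vulnerability discovery", "practicing for a CTF exercise")))      # review
```

The point of the sketch is only that the decision depends on context as well as content, which is the nuance Ledzian describes.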
What can cyber-defenders do to mitigate the weaponization of AI and to ensure resiliency for their organizations?
Ledzian: AI workloads are susceptible to specific types of attacks and need to be protected accordingly. In March 2025, Google announced AI Protection to help organizations keep their AI workloads secure.
Google first spoke publicly about its own internal AI red team back in 2023. For governance, Google has also introduced its Secure AI Framework (SAIF) which is freely available, offers a Risk Self-Assessment and has recently been updated to include coverage for Agentic AI.
Aside from protecting their own AI workloads, organizations need to defend themselves against AI-powered attacks. This is best done with AI-powered defenses. Google announced “The dawn of agentic AI in security operations” in April 2025.
Take the Google Security Operations solution, a cloud-native SaaS offering, as an example. It powers organizations and Security Operations Centers (SOCs) with agentic capabilities, allowing cyber-defenders to do what they never could before: automate alert triage and investigation at scale, among other tasks.
The agentic SOC will run semi-autonomously, giving cyber-defenders the bandwidth they need to counter an ever-evolving set of AI-based threats from attackers.
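To make the idea of automated alert triage concrete, here is a minimal, hypothetical sketch of a triage loop. The alert fields, enrichment step, and scoring threshold are invented for illustration and do not represent the Google Security Operations product or its APIs.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: enrich each alert, score it, then auto-close,
# escalate, or queue it for a human analyst.
@dataclass
class Alert:
    source: str
    description: str
    severity: int                       # 1 (low) .. 10 (critical)
    enrichment: dict = field(default_factory=dict)

def enrich(alert: Alert) -> Alert:
    """Attach context an analyst would normally gather by hand (illustrative)."""
    alert.enrichment["known_benign"] = "scheduled scan" in alert.description.lower()
    alert.enrichment["asset_criticality"] = 3  # placeholder asset lookup
    return alert

def triage(alert: Alert) -> str:
    alert = enrich(alert)
    if alert.enrichment["known_benign"]:
        return "auto-close"
    score = alert.severity + alert.enrichment["asset_criticality"]
    return "escalate" if score >= 10 else "analyst-review"

queue = [
    Alert("EDR", "Scheduled scan triggered AV signature", severity=2),
    Alert("SIEM", "Credential dumping detected on domain controller", severity=9),
]
for a in queue:
    print(a.source, "->", triage(a))
```

An agentic SOC would layer reasoning, investigation, and reporting on top of a loop like this; the sketch only shows the basic triage-at-scale pattern the interview refers to.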



