The AI arms race is reshaping the threat landscape. Can defenders gain ground against their unregulated opponents in this high-stakes competition?
Governments, financial institutions, and enterprises across Asia Pacific now find themselves in the highest-stakes contest they have ever faced: an AI decathlon against AI-empowered cyber-attackers, where every event tests resilience, speed, and trust.
Attackers are harnessing generative AI to craft hyper-personalized fraud, automate social engineering at scale, and manipulate financial data, while defenders race to leverage AI for real-time detection, response, and governance.
How is the AI arms race reshaping the threat landscape? Can defenders gain ground against their unregulated opponents? How do we train our security and risk teams to stay compliant, resilient, and ahead in the next event?
CybersecAsia.net gleans some insights and answers from Ofir Israel, VP of Threat Prevention and AI, Check Point Software Technologies:

Ofir Israel, VP, Threat Prevention & AI – Check Point Software Technologies, CPX2025
How is AI drastically altering the cyberthreat landscape?
Ofir: AI is fundamentally changing the calculus of a cyberattack. It is democratizing sophistication and automating speed at scale. For attackers, AI acts as a force multiplier:
- Massive scale and personalization: Generative AI allows for the creation of hyper-realistic, targeted content. Phishing emails and social engineering attacks can be tailored precisely to an individual’s role, writing style, or current business context, making them virtually indistinguishable from legitimate communications.
- Faster and more evasive malware: AI can automate vulnerability discovery, finding in minutes exploits that would take human hackers weeks to uncover. It also enables the creation of polymorphic malware that constantly adapts its code and behavior to evade traditional, signature-based security systems.
- Lowering the barrier to entry: Malicious AI models like WormGPT or FraudGPT are being sold as “Cybercrime-as-a-Service” on the dark web. This empowers amateur threat actors with the capabilities of a highly sophisticated, nation-state-level attacker.
The “AI versus AI” competition is not a game. What are the stakes — financial, economic, political etc.?
Ofir: The stakes are profound and cover every layer of society:
- Financial & economic: We’re seeing more intense and sophisticated attacks, with ransomware alone surging. Mega-breaches often result in financial loss and also in massive reputational damage and a loss of customer trust. For organizations, a successful breach can directly threaten long-term business viability.
- Political & geopolitical: State-affiliated threat actors are leveraging AI to advance cyber influence operations, espionage, and attacks on Critical Information Infrastructure (CII). The integrity of national digital economies, the security of essential services (power grids, water, and finance), and the stability of geopolitical relations are on the line.
- Societal: AI-powered fraud and synthetic media (deepfakes) erode digital trust, making it harder for people and businesses to trust online identities and communication. This may impact everything from financial transactions to democratic processes.
Attackers are harnessing AI for hyper-personalized fraud, automated social engineering at scale, and manipulation of financial data. Unlike defenders, cybercriminals are not restricted by geographical borders, regulations, or ethical concerns. How serious is the AI divide between attackers and defenders?
Ofir: The AI divide is serious but not insurmountable. The fundamental problem is the asymmetry of the conflict. Attackers only need to be successful once, while defenders must be successful every single time. Offensive AI gives the threat actors an extreme advantage in speed and scale. They can fully automate reconnaissance, initial compromise, and payload generation with minimal human oversight, operating at machine speed.
For defenders relying on manual triage or fragmented, legacy security systems, this speed disparity means they are guaranteed to be left behind. Manual triage is no longer viable. The only way to counter an AI-powered attack is with an equally sophisticated, autonomous AI defense fabric.
The key to closing this divide lies in the operationalization of AI for defense, moving from simple detection to autonomous prevention. This requires a unified platform approach to ingest and correlate massive amounts of data in real time.
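To make the correlation point concrete, here is a minimal sketch, written for this article and not drawn from any vendor’s implementation, of the basic idea: ingest alerts from several sources and surface the entities that more than one source flags. All names, fields, and thresholds are illustrative assumptions.

```python
# Minimal, hypothetical sketch of cross-source alert correlation.
# Not any vendor's implementation; names and fields are illustrative.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Alert:
    source: str      # e.g. "endpoint", "network", "cloud"
    entity: str      # the user or host the alert refers to
    severity: int    # 1 (low) .. 5 (critical)
    description: str

def correlate(alerts: list[Alert], min_sources: int = 2) -> dict[str, list[Alert]]:
    """Group alerts by entity and keep only entities seen by multiple sources."""
    by_entity: dict[str, list[Alert]] = defaultdict(list)
    for alert in alerts:
        by_entity[alert.entity].append(alert)
    return {
        entity: items
        for entity, items in by_entity.items()
        if len({a.source for a in items}) >= min_sources
    }

if __name__ == "__main__":
    feed = [
        Alert("endpoint", "host-042", 3, "Suspicious PowerShell execution"),
        Alert("network", "host-042", 4, "Outbound traffic to known C2 address"),
        Alert("cloud", "alice", 2, "Unusual API call volume"),
    ]
    for entity, items in correlate(feed).items():
        print(entity, "->", [a.description for a in items])
```

The design point is simply that signals which look low-priority in isolation become actionable once a platform can join them across the network, cloud, and endpoint layers in real time.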
How effectively are defenders leveraging AI for real-time detection, response, and governance? Where are they gaining ground in this AI arms race?
Ofir: Defenders are making significant gains, primarily by leveraging AI to act as a force multiplier for the human security analyst and by shifting the security paradigm from reactive to anticipatory. They are gaining ground in several critical areas:
- Autonomous threat prevention: Advanced AI-driven security platforms use machine learning to establish a baseline of normal user and network behavior (UEBA, or User and Entity Behavior Analytics). Any deviation, such as an unusual login from a new location or access to a sensitive file at an odd hour, is flagged and automatically prevented in real time, often before the attack fully deploys. This includes defending against zero-day attacks by focusing on behavior rather than known signatures. A simplified sketch of this baselining idea follows this list.
- Accelerated incident response: AI is integrated with Security Orchestration, Automation, and Response (SOAR) platforms to automate repetitive, time-intensive tasks. This includes rapid containment by automatically isolating infected endpoints or blocking malicious IP addresses. By triaging and prioritizing the thousands of alerts daily (many of which are false positives), AI frees up human analysts to focus on complex, strategic investigations.
- Vulnerability management: AI can proactively analyze code, network configurations, and system architectures to identify misconfigurations and vulnerabilities before they are exploited. More importantly, it can prioritize remediation based on exploitability and asset criticality, optimizing the use of scarce resources.
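As a rough illustration of the baselining approach described in the first bullet above (an illustration of the concept only, not any vendor’s detection logic), the sketch below learns which locations and login hours are normal for each user from historical data and flags logins that deviate. The names and the anomaly rule are simplified assumptions.

```python
# Minimal, hypothetical UEBA-style sketch: baseline normal login behavior per
# user, then flag logins that deviate (new location or unusual hour).
# Illustrative only; not any vendor's actual detection logic.
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Login:
    user: str
    location: str   # e.g. a country or city code
    hour: int       # 0-23, local time

@dataclass
class Baseline:
    locations: set = field(default_factory=set)
    hours: set = field(default_factory=set)

def build_baselines(history: list[Login]) -> dict[str, Baseline]:
    """Learn the locations and hours each user normally logs in from."""
    baselines: dict[str, Baseline] = defaultdict(Baseline)
    for login in history:
        baselines[login.user].locations.add(login.location)
        baselines[login.user].hours.add(login.hour)
    return baselines

def is_anomalous(login: Login, baselines: dict[str, Baseline]) -> bool:
    """Flag a login from an unseen location or at an hour never observed before."""
    base = baselines.get(login.user)
    if base is None:
        return True  # a user with no history at all is treated as anomalous
    return login.location not in base.locations or login.hour not in base.hours

if __name__ == "__main__":
    history = [Login("alice", "SG", 9), Login("alice", "SG", 14), Login("alice", "MY", 10)]
    baselines = build_baselines(history)
    print(is_anomalous(Login("alice", "SG", 10), baselines))   # False: familiar pattern
    print(is_anomalous(Login("alice", "RU", 3), baselines))    # True: new location and hour
```

A production system would of course model far richer behavior and score deviations probabilistically rather than with a binary rule, but the principle is the same: judge activity against what is normal for that entity, not against a static signature.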
What should organizations in Asia Pacific do to build resilience and trust in this AI-powered cybersecurity decathlon?
Ofir: Organizations in the Asia Pacific region, which may be targets for sophisticated threats including nation-state-backed APTs, must adopt a proactive, intelligence-led defense strategy centered around three key pillars:
- Embrace a consolidated, AI-powered platform: Move away from a patchwork of multiple, fragmented, “best-of-breed” point solutions. This complexity is a defender’s biggest vulnerability. Invest in a unified, end-to-end AI-powered platform that provides collaborative threat prevention across the entire attack surface: network, cloud, endpoint, and mobile. The goal is a “single-pane-of-glass” management that delivers AI-driven automation and real-time threat intelligence.
- Prioritize securing AI & identity: As defenders rush to adopt generative AI, they must enforce guardrails for its use. This includes securing the AI supply chain, implementing Data Loss Prevention (DLP) for Generative AI services, and ensuring continuous AI risk assessment. More fundamentally, organizations must secure the human factor. Phishing-resistant Multi-Factor Authentication (MFA) and a strict Zero Trust Network Access (ZTNA) model are non-negotiable, as identity is the new perimeter.
- Invest in threat intelligence and skills: Organizations should boost their investments in localized threat intelligence to understand and anticipate region-specific threats and geopolitical risks. Since human teams cannot keep pace with machine-speed attacks, organizations may leverage AI to supercharge their security teams, reducing investigation times from days to minutes. Simultaneously, prioritize upskilling the board and employees on new cyber risks, fostering a security-first culture from the top down.
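As a simplified illustration of the “DLP for generative AI services” guardrail mentioned above, the sketch below redacts obvious sensitive patterns from a prompt before it would be forwarded to an external GenAI service. The patterns, names, and policy are illustrative assumptions; a production DLP control would be far broader and policy-driven.

```python
# Simplified, hypothetical DLP filter for outbound GenAI prompts: redact
# obvious sensitive patterns before the text leaves the organization.
# Patterns and policy here are illustrative assumptions, not a real product.
import re

REDACTION_RULES = {
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Return the redacted prompt and the list of rule names that fired."""
    hits: list[str] = []
    for name, pattern in REDACTION_RULES.items():
        if pattern.search(prompt):
            hits.append(name)
            prompt = pattern.sub(f"[REDACTED:{name}]", prompt)
    return prompt, hits

if __name__ == "__main__":
    text = "Summarize this: card 4111 1111 1111 1111, contact ops@example.com"
    clean, findings = redact_prompt(text)
    print(clean)      # sensitive values replaced with placeholders
    print(findings)   # rules that fired, e.g. ['CREDIT_CARD', 'EMAIL']
```

The broader point is that GenAI usage should pass through the same inspection and logging choke points as any other outbound data flow.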
Collaboration is another crucial element in cybersecurity. No single entity can stand alone against modern cyberattacks: digital borders are porous, and threat actors exploit weaknesses across the entire ecosystem, from government networks to private-sector infrastructure. Collaboration among government agencies, the cybersecurity industry, and end-users is essential to improving threat detection, investigation, and incident response.
In Singapore, for instance, the recently announced establishment of the Digital Defense Hub under MINDEF, to counter advanced digital threats such as APTs targeting government and critical infrastructure, is an excellent example of a robust and proactive measure built on collaboration.



