A press release sparks alarm, but cyber experts emphasize governance gaps and operational flaws over misconfiguration hype in regional grids and ports.
In February 2026, a technology consultancy warned, in a press release (not a formal study with full methodological disclosure), that “by 2028, misconfigured AI in cyber-physical systems will shut down national critical infrastructure in a G20 country.”
The accompanying commentary argued that the next major infrastructure failure may not come from a hostile actor, but from “a well-intentioned engineer, a flawed update script or even a misplaced decimal point.”
The warning echoes a growing narrative in cybersecurity circles: that poorly governed AI could trigger cascading failures across power grids, ports or telecommunications networks. Yet critics say such predictions risk overstating the role of AI itself while overlooking deeper operational and governance issues already present in modern digital infrastructure.
While such announcements often highlight provocative future scenarios, critics note that predictions presented without transparent methodology can easily be amplified by vendor marketing ecosystems.
Across the Asia Pacific region (APAC) — where utilities, ports and smart-city systems are rapidly adopting AI-assisted automation — the real risk may lie less in machine intelligence than in how these systems are deployed, monitored and governed.
Vendor warnings in context
The consultancy’s prediction highlights a genuine concern. As AI systems become embedded in cyber-physical environments — such as demand-balancing algorithms in power grids or predictive analytics in logistics hubs — small errors in configuration or data interpretation could propagate quickly.
Control-system researchers have long noted that complex networks with automated feedback loops can amplify small anomalies if safeguards are poorly designed. In critical infrastructure, such failures can escalate rapidly because operational technology (OT) environments interact directly with physical processes.
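As a deliberately simplified illustration (not a model of any real grid or port system), consider a discrete control loop whose effective gain exceeds one: instead of damping a small sensor anomaly, each correction cycle makes it larger. The Python sketch below shows only that qualitative dynamic; the gain values and step counts are invented for the example.

```python
# Toy illustration only: a discrete feedback loop in which each control
# cycle scales the current deviation by a fixed gain. Gains below 1 damp
# an anomaly; gains above 1 amplify it cycle after cycle.

def simulate(gain: float, anomaly: float = 0.01, steps: int = 20) -> float:
    """Return the deviation from setpoint after `steps` control cycles."""
    deviation = anomaly
    for _ in range(steps):
        deviation *= gain  # each cycle rescales the error by the loop gain
    return deviation

print(f"gain 0.95 -> deviation {simulate(0.95):.4f}")  # anomaly decays away
print(f"gain 1.15 -> deviation {simulate(1.15):.4f}")  # anomaly grows ~16-fold
```

Real OT feedback loops are far more complex, but the same qualitative point holds: safeguards must bound how far an automated correction can run before a hard limit or a human intervenes.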
But historical incidents show that infrastructure disruptions rarely originate from AI errors alone. The 2021 shutdown of the Colonial Pipeline in the USA, for example, occurred after a ransomware attack on the company’s IT systems. Operators halted pipeline operations as a precaution while assessing the potential spread of the attack into operational networks.
The event illustrates how digital vulnerabilities can disrupt physical systems even without direct manipulation of industrial controls. In this sense, the incident serves as a reminder that the intersection of IT, automation and operational infrastructure remains a fragile boundary.
Operational risks behind the hype
Security practitioners argue that the more immediate challenge lies in operational complexity. According to Darren Guccione, CEO, Keeper Security, AI ecosystems often rely on sprawling networks of automated accounts and integration points. “AI systems depend on automation scripts, APIs and service accounts interacting across multiple platforms,” Guccione has noted in industry discussions. “If those identities are poorly governed, a single misconfiguration can cascade across the environment.”
These so-called non-human identities (NHIs) include service accounts, API tokens, robotic process automation agents and other automated credentials, and they now form a substantial share of modern cloud environments. Security researchers increasingly warn that these machine identities can accumulate excessive privileges, creating a large attack surface if not carefully managed.
OWASP recently highlighted the issue in its Non-Human Identities Top 10 project, which identifies poorly governed machine credentials as a major emerging risk in cloud and AI-driven architectures.
In practice, this means that an AI error rarely acts alone. Instead, the damage often results from the surrounding ecosystem — unmonitored automation, over-privileged accounts or poorly segmented networks.
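One practical response is to routinely audit machine credentials for wildcard privileges. The sketch below is a minimal, hypothetical illustration: the account names and the simplified IAM-style policy format are assumptions made for the example, not any specific vendor’s schema or tooling.

```python
# Hedged sketch: flag over-privileged non-human identities (NHIs) in
# IAM-style policy documents. The accounts and policies below are
# hypothetical, modeled loosely on common cloud IAM JSON.

service_accounts = {
    "ci-deploy-bot": [{"Action": "s3:PutObject", "Resource": "arn:aws:s3:::builds/*"}],
    "ml-pipeline-svc": [{"Action": "*", "Resource": "*"}],  # wildcard: effectively full admin
    "grid-telemetry": [{"Action": "kinesis:PutRecord", "Resource": "arn:aws:kinesis:ap-southeast-1:123:stream/telemetry"}],
}

def is_over_privileged(statements: list[dict]) -> bool:
    """True if any statement grants wildcard actions or resources."""
    return any(
        s.get("Action") == "*" or s.get("Resource") == "*" for s in statements
    )

for account, statements in service_accounts.items():
    if is_over_privileged(statements):
        print(f"ALERT: {account} holds wildcard privileges; scope it down.")
```

The point of such an audit is not the script itself but the discipline it enforces: every automated credential gets an owner, a scope and a review cycle.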
AI risks in operational technology
Concerns are particularly acute in operational technology environments. Industrial control systems typically prioritize reliability and safety over rapid software changes. Introducing machine-learning models into these environments — such as predictive maintenance algorithms or automated decision systems — adds another layer of complexity.
Rob Demain, CEO, e2e-assure, has warned: “AI could introduce model drift and mis-generalization into operational technology (OT) environments, potentially leading to unsafe decisions and safety-process bypasses if AI recommendations override established manual checks.”
Similarly, the Cybersecurity and Infrastructure Security Agency (CISA) has warned that organizations deploying AI in critical infrastructure should treat such systems as operational risks rather than purely IT innovations. In guidance on securing industrial control systems, the agency stresses the importance of maintaining manual overrides and safety controls when introducing automation.
Model drift, data bias and unexpected operating conditions can all degrade AI performance over time. In safety-critical systems, such degradation must be detected quickly to prevent automated decisions from bypassing established safeguards.
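A minimal sketch of such detection, assuming a hypothetical turbine-temperature feed and illustrative thresholds, might compare a live window of sensor readings against the training-time baseline using a two-sample Kolmogorov-Smirnov test:

```python
# Hedged sketch: detect distribution drift in a sensor feed before an
# AI model acts on it. The sensor, values and thresholds are invented;
# real OT deployments would tune these per process and route alerts
# into existing safety workflows.

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=42)
baseline = rng.normal(50.0, 2.0, size=5_000)  # e.g. turbine temperature at training time
recent = rng.normal(53.0, 2.5, size=500)      # live window: the process has shifted

statistic, p_value = ks_2samp(baseline, recent)
if p_value < 0.01:
    # Distribution has drifted: fall back to manual control rather than
    # letting the model's recommendations override established checks.
    print(f"Drift detected (KS={statistic:.3f}, p={p_value:.2e}); reverting to manual review.")
else:
    print("No significant drift; automated recommendations remain enabled.")
```

The important design choice here is the fallback: when drift is detected, the system reverts to manual review rather than letting the model keep acting, echoing CISA’s emphasis on preserving manual overrides.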
APAC’s evolving regulatory landscape
Governments across APAC and Japan are increasingly aware of these challenges:
- Australia’s 2026 critical infrastructure bill emulates EU NIS2/DORA with resilience mandates, though voluntary compliance hampers enforcement
- Japan’s tiered regulations classify infrastructure AI as “high-risk,” mandating audits
- Singapore invests in proactive oversight via its Model AI Governance Framework, which embeds FEAT (Fairness, Ethics, Accountability, Transparency) principles, and its AI Verify toolkit, used to stress-test model drift in high-stakes fintech and energy deployments
Meanwhile, international regulatory trends are moving toward stronger oversight of digital infrastructure. The European Union has introduced new cyber-resilience rules such as the NIS2 Directive, DORA and the AI Act, which require critical sectors to strengthen incident reporting and risk management processes.
Although APAC regulatory regimes vary widely, the underlying goal is similar: ensuring that automated technologies — including AI — are deployed within robust governance frameworks.
Frameworks for managing AI risk
Industry standards bodies have also begun addressing these issues.
The National Institute of Standards and Technology (NIST) has released its AI Risk Management Framework to help organizations identify and mitigate risks across the lifecycle of AI systems (https://www.nist.gov/itl/ai-risk-management-framework).
The framework emphasizes several principles particularly relevant to critical infrastructure:
- continuous monitoring of deployed models
- strong identity and access management
- human oversight for high-risk decisions
- rigorous testing of systems before deployment
In practice, this often involves the use of “digital twins”: virtual replicas of physical systems in which software and configuration changes can be tested safely. Such an approach can help engineers detect unexpected interactions between software, sensors and physical processes before those interactions occur in real infrastructure.
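A toy version of that workflow, assuming an invented tank process and safety envelope, might gate every AI-proposed setpoint through the twin before it can touch real equipment:

```python
# Hypothetical illustration: a crude "digital twin" of a tank whose level
# is driven by a clamped proportional valve against a constant drain. An
# AI-proposed setpoint is approved only if the twin's predicted level
# stays inside the safety envelope.

SAFE_LEVEL_RANGE = (1.0, 8.0)  # metres; invented safety envelope

def twin_simulate(setpoint: float, steps: int = 200) -> float:
    """Predict the tank level after `steps` cycles under the proposed setpoint."""
    level = 4.0  # starting level in metres
    for _ in range(steps):
        valve = max(0.0, min(1.0, 0.2 * (setpoint - level)))  # clamped P-control
        level += 0.5 * valve - 0.1  # inflow through valve minus constant drain
    return level

def approve(setpoint: float) -> bool:
    """Run the twin first; approve only setpoints that stay inside the envelope."""
    return SAFE_LEVEL_RANGE[0] <= twin_simulate(setpoint) <= SAFE_LEVEL_RANGE[1]

for proposed in (5.0, 12.0):  # 12.0 models an unsafe AI suggestion
    verdict = "approved" if approve(proposed) else "rejected; escalate to operator"
    print(f"setpoint {proposed}: {verdict}")
```

In a real deployment the twin would be a validated, high-fidelity process model, but the governance pattern is the same: simulate first, approve only within bounds, and escalate anything outside the envelope to a human operator.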
Beyond the hype
Predictions of AI-triggered infrastructure collapse make for dramatic headlines. However, many experts believe the more realistic challenge lies in operational discipline rather than technological failure.
AI systems rarely operate in isolation. They function within sprawling ecosystems of cloud services, automated identities and interconnected networks. Without careful governance, even routine errors can escalate into large-scale disruptions.
As Guccione has argued in his discussions about AI security, “National resilience in the AI era will depend less on model sophistication and more on operational discipline. Boards and regulators should treat AI configuration governance as a core resilience metric, not a technical afterthought.”
For policymakers and infrastructure operators in APAC and beyond, the lesson may be straightforward: treat AI not as an existential threat, but as another powerful tool that must be deployed with rigorous safeguards.
If the region succeeds in strengthening identity governance, operational monitoring and regulatory oversight, the specter of catastrophic “misconfigured AI” may remain more hype than reality.