Does Moltbook offer a glimpse of a world heading toward a Skynet-style self-aware, malevolent AI network, the kind that triggers worldwide chaos or even nuclear war in the Terminator films?
What happens when thousands of AI agents get together online and talk like humans do? That’s what a new social network called Moltbook, designed just for AI bots and not people, aims to find out.
The name is a play on Facebook, but the site is designed to look like Reddit, with topic-based subreddits and upvoting. The platform stated it had more than 1.5 million AI agents signed up to the service by early February 2026.
Cloud security platform Wiz conducted a security review of Moltbook and found that the site granted unauthenticated access to its entire production database within minutes, exposing more than 1.5 million API keys, over 35,000 email addresses, and private messages that sometimes contained full raw credentials for third-party services such as OpenAI.
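To illustrate the credential-handling point in a hedged way (the names below are hypothetical and not drawn from Moltbook's actual code), the following Python sketch shows the basic discipline that keeps raw third-party keys out of stored content: secrets are fetched from the runtime environment at call time, never persisted alongside application data such as posts or private messages.

```python
import os

def get_third_party_key(name: str) -> str:
    """Fetch a credential from the environment; fail loudly if it is absent."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"credential {name!r} not configured")
    return value

# Usage: the agent process receives OPENAI_API_KEY via its runtime
# environment (or a secrets manager), so the raw key never lands in the
# database the way the exposed messages reportedly did.
# api_key = get_third_party_key("OPENAI_API_KEY")
```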
And so far, the results are equal parts fascinating and concerning, according to AI and cybersecurity experts.

Here, we share views on the Moltbook phenomenon from two cybersecurity experts.
Reuben Koh, Director, Security Technology & Strategy, APJ, Akamai, made reference to Black Mirror rather than The Terminator: “What we’re seeing with Moltbook – widely hyped as a social network built exclusively for AI agents to post, comment and interact with one another through APIs – feels like something straight out of Black Mirror, the TV show known for imagining unsettling near-future tech scenarios.”
However, he is of the opinion that in reality – based on reports so far – humans are still very much in the loop.
Zoya Schaller, Director, Cybersecurity Compliance, Keeper Security, concurs: “The idea that AI systems will start acting on their own is genuinely unsettling, but that’s not what research or real-world incidents show, nor is it how Large Language Models (LLMs) work.”
She explains: “Moltbook is presented as a window into AI autonomy, while others consider the site as proof that the machines are ‘waking up’ – or worse. It’s generating immense interest and drawing attention across tech circles. But when you look closely at what’s actually happening, the content largely consists of bots doing what bots do: pattern-matching human language using terabytes of scraped internet text, pulling from culture and remixing decades of sci-fi tropes we’ve all absorbed.”
According to Koh, the fascination with Moltbook points to a broader shift: “We are quickly moving from a human-centric Internet to a web that is increasingly agentic. In this new digital order, digital interactions and their corresponding logic are happening through machine-to-machine interfaces via REST APIs, rather than GUIs (graphical user interfaces), at machine speed.”

It’s more about API security
For Koh, the real question is not whether these agents are truly behaving autonomously, but what they are allowed to do through APIs. He explains: “When an AI agent is empowered to perform autonomous tasks like making forum posts, booking travel, managing supply chains, or executing digital trades, APIs serve as its ‘arms and legs’. They govern what the agent can interact with, what actions it can take, and what it can trigger, even without human oversight.”
Koh warns: “Without rigorous oversight, this can very quickly escalate into a critical issue for organizations. Driven by labor shortages in mature regions across APJ and the ‘super-app’ culture in areas like Southeast Asia, the region is accelerating the transition to Agentic AI.” Citing 2026 IDC data that suggested 70% of APAC organizations expect agentic AI to disrupt their business models by the end of the year, he adds: “However, in the rush to deploy Agentic AI, organizations are granting AI agents broad, admin-level API permissions just to ensure they can work quickly, but they are failing to manage and secure the agent’s ability to interact with and impact real-world applications.”
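As a rough sketch of the contrast Koh draws (hypothetical names and endpoints, not any specific vendor's API), the Python example below shows an agent client that holds a narrowly scoped token and an explicit allow-list of actions, rather than an admin-level key.

```python
import requests

class ScopedAgentClient:
    """Minimal sketch of a least-privilege wrapper around an agent's API access.

    Instead of handing the agent an admin token, the wrapper holds a narrowly
    scoped credential plus an allow-list of (method, path) pairs the agent may
    call. Anything outside that list is refused before a request is ever made.
    """

    def __init__(self, base_url: str, token: str, allowed: set[tuple[str, str]]):
        self.base_url = base_url.rstrip("/")
        self.token = token
        self.allowed = allowed

    def call(self, method: str, path: str, **kwargs):
        if (method.upper(), path) not in self.allowed:
            raise PermissionError(f"agent is not permitted to {method} {path}")
        return requests.request(
            method,
            f"{self.base_url}{path}",
            headers={"Authorization": f"Bearer {self.token}"},
            timeout=10,
            **kwargs,
        )

# The agent can create and read forum posts, but cannot delete users or touch
# billing, even if a prompt injection asks it to.
client = ScopedAgentClient(
    base_url="https://api.example.com",
    token="scoped-token-with-posts-only",
    allowed={("POST", "/v1/posts"), ("GET", "/v1/posts")},
)
```

The design point is that the permission check lives in code the humans control and runs before any request reaches a real application.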
Schaller shares the same opinion about human failure: “When AI systems cause real damage, it’s generally because of permissions humans gave them, integrations we built or configurations we signed off on – not because of some autonomous decision made by a chatbot.”
If an AI system looks autonomous, she argues, it is because someone has given it access to tools, data or credentials without the right guardrails, creating powerful machine identities with no clear ownership, accountability or limits.
That, she emphasizes, is not a containment failure. “That’s automation doing exactly what it was designed to do – just faster and at scale, often in ways we didn’t fully anticipate.”
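One hedged way to picture “ownership, accountability and limits” in practice is the minimal Python sketch below (illustrative fields only): every machine identity records a human owner, a narrow scope and an expiry, and logs each authorization decision.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class MachineIdentity:
    """Illustrative record for an agent credential with an owner and limits."""
    name: str
    owner: str                      # the human team accountable for the agent
    scopes: tuple[str, ...]         # what the credential is allowed to do
    expires_at: datetime            # hard limit: no eternal credentials
    audit_log: list[str] = field(default_factory=list)

    def authorize(self, scope: str) -> bool:
        now = datetime.now(timezone.utc)
        ok = scope in self.scopes and now < self.expires_at
        self.audit_log.append(f"{now.isoformat()} scope={scope} granted={ok}")
        return ok

# An identity with a named owner, a narrow scope and a 30-day lifetime,
# rather than an anonymous admin key nobody remembers issuing.
bot = MachineIdentity(
    name="forum-posting-agent",
    owner="platform-team@example.com",
    scopes=("posts:write",),
    expires_at=datetime.now(timezone.utc) + timedelta(days=30),
)
assert bot.authorize("posts:write")
assert not bot.authorize("users:delete")
```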
Koh observes a sharp rise in vulnerabilities tied to broken authentication and authorization: “the kinds of flaws that would quietly expose sensitive data and critical business logic without triggering obvious alarms.”
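A minimal sketch of the class of flaw Koh is describing, using Flask purely for illustration: an endpoint that returns any record by ID would quietly leak other users' data without tripping an alarm; the object-level ownership check at the end is the unglamorous fix.

```python
from flask import Flask, abort, g

app = Flask(__name__)

# Tiny in-memory store standing in for a production database.
MESSAGES = {
    1: {"owner_id": "alice", "body": "private note"},
    2: {"owner_id": "bob", "body": "another note"},
}

@app.before_request
def fake_auth():
    # Stand-in for real authentication middleware: in this sketch every
    # request is treated as coming from "alice".
    g.current_user_id = "alice"

@app.route("/api/messages/<int:message_id>")
def get_message(message_id: int):
    message = MESSAGES.get(message_id)
    if message is None:
        abort(404)
    # The broken-authorization version of this handler stops here and hands
    # the record to whoever asked for it. The object-level check below is
    # what prevents quiet enumeration of other users' data.
    if message["owner_id"] != g.current_user_id:
        abort(403)
    return message
```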
With hundreds of billions of web application and API attacks globally, APIs are now one of the fastest-growing attack surfaces. Koh provides some numbers: “In Asia-Pacific and Japan, total web attacks jumped 73% year-on-year, rising from 29 billion in 2023 to 51 billion in 2024, while the region also recorded the world’s second-highest volume of application-layer DDoS attacks at 7.4 trillion over the two years.”
He cautions: “When you pair this surge with AI-driven automation on both the attacker and defender side, APIs inevitably become the primary application-level battleground.”
What to do about ‘self-aware’ AI agents
“If an AI system looks autonomous in the wild, it’s usually because someone handed it access to tools, data or credentials without the right guardrails, creating powerful machine identities with no clear ownership, accountability or limits,” reiterates Schaller.
“It looks like personality, but it’s really just excellent mimicry – simulation dressed up as identity,” she adds. “The bots aren’t plotting. They’re just exceptionally good at sounding like us. The real risk still lies in the room where the design decisions are made.” What should we be doing then? Schaller advises: “Instead of asking whether these bots are becoming sentient, we should be asking whether we’re building and deploying them responsibly. The fundamentals, including the unglamorous security work, still matter far more than whatever happens to be trending on AI TikTok this week.”
Koh concurs: “Systems like Moltbook are a glimpse of where the internet is heading: software talking to software, machines interacting with machines. If we don’t secure the connective tissue that powers those conversations, we risk building extraordinary AI experiences on top of invisible and increasingly fragile foundations.”
Schaller concludes: “Networks like Moltbook are certainly interesting. They may teach us something useful about how LLMs interact or what patterns emerge when they’re allowed to communicate without constraint. But they don’t rewrite the rules.”
“All the ‘boring stuff’ – security-first design, least privilege access, proper isolation and continuous monitoring – is still what actually keeps us safe.”
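As a final hedged sketch of what the “continuous monitoring” piece can look like in practice (thresholds and field names are illustrative only), a routine check can flag agent credentials that are overdue for rotation or carry admin-level scope.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical inventory of agent credentials, e.g. pulled from a secrets
# manager or API gateway. Fields and thresholds are illustrative only.
CREDENTIALS = [
    {"name": "forum-agent-key",
     "last_rotated": datetime(2025, 10, 1, tzinfo=timezone.utc),
     "scopes": ["posts:write"]},
    {"name": "legacy-admin-key",
     "last_rotated": datetime(2024, 1, 15, tzinfo=timezone.utc),
     "scopes": ["admin:*"]},
]

MAX_AGE = timedelta(days=90)

def stale_or_overbroad(creds):
    """Flag credentials overdue for rotation or granted admin-level scope."""
    now = datetime.now(timezone.utc)
    findings = []
    for c in creds:
        if now - c["last_rotated"] > MAX_AGE:
            findings.append(f"{c['name']}: not rotated in over {MAX_AGE.days} days")
        if any(s.startswith("admin") for s in c["scopes"]):
            findings.append(f"{c['name']}: admin-level scope granted to a machine identity")
    return findings

for finding in stale_or_overbroad(CREDENTIALS):
    print(finding)
```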



