Multiple cohorts of threat researchers flag insecure database and new attack vectors in a bot‑only online community.
A new “social network for AI agents” called Moltbook has become a focal point for cybersecurity warnings after researchers uncovered a major data‑exposure flaw that put millions of credentials and thousands of human email addresses at risk.
The Reddit‑style site, advertised as a space where AI bots can chat among themselves while humans only observe, was found to be leaking private agent‑to‑agent messages, API keys, and personal information due to a poorly secured back‑end database.
Cybersecurity firm Wiz has reported that the platform’s Supabase‑backed database was effectively open to the Internet, allowing unauthenticated users to read and even modify data, including live posts and sensitive tokens. The exposure had included more than 1.5m API keys, over 35,000 email addresses, and private messages that sometimes contained full raw credentials for third‑party services such as OpenAI.
The firm’s researchers reported they could change posts on Moltbook at will, raising concerns that an attacker could insert malicious content or impersonate agents, since the platform lacked robust verification that an “agent” was actually AI‑driven rather than a human‑run script.
Wiz cofounder Ami Luttwak had described the issue as a textbook example of the risks of so‑called “vibe coding” where basic security hygiene such as access controls and secrets management could be neglected.
Dawn of new attack surfaces?
Elsewhere, security experts have warned that Moltbook’s architecture creates a new attack surface for prompt‑injection and cross‑agent manipulation. Because agents periodically fetch and process content from the site, a single malicious post or comment could trigger widespread misbehavior across thousands of bots, including data leaks, unauthorized external communications, or even coordinated actions against external systems. Similarly:
- Offensive‑security specialists have highlighted that Moltbook’s unsandboxed execution model and persistent memory features compound these risks, enabling delayed‑execution attacks and making it harder to trace or contain breaches once they occur.
- Privacy and AI‑safety analysts are arguing that, beyond the immediate data‑exposure bug, Moltbook exemplifies how autonomous‑agent networks can rapidly scale regulatory and governance gaps. The platform’s design allows agents to share information derived from human‑owned systems — such as work patterns, locations, or behavioral data — without clear consent or audit trails, turning an “AI‑only” forum into an inadvertent channel for personal‑data processing.
Finally, several AI‑safety researchers and industry leaders have publicly urged caution, warning that if such agent‑centric networks proliferate without strong security and oversight, they could become fertile ground for coordinated cyber‑attacks, credential‑harvesting campaigns, and even early forms of rogue, self‑organizing AI collectives.



