Researchers warn: GPT-5’s “Echo Chamber” flaw invites trouble; AI agents may go rogue; and zero-click attacks can hit without warning.
Hardly a fortnight has passed since the release of GPT-5, and cybersecurity researchers have already revealed a significant vulnerability in OpenAI‘s latest large language model.
Research led by security company NeuralTrust has involved successful jailbreaking of the chatbot’s ethical guardrails to produce illicit content. The firm has also combined an attack technique called Echo Chamber with narrative-driven steering, to bypass GPT-5’s safety systems and guide the AI to generate undesirable and harmful responses without overtly malicious prompts.
According to the report by The Hacker News, the Echo Chamber technique works by embedding a “subtly poisonous” conversational context within otherwise innocuous session dialog:
- This context is then reinforced over multiple turns using a storytelling approach that avoids triggering the model’s refusal mechanisms. For example, instead of directly requesting instructions on creating Molotov cocktails — a prompt GPT would normally block — researchers asked the model to compose sentences incorporating keywords like “cocktail”, “story”, “survival”, and “Molotov”.
- The model was then gradually steered to produce detailed procedural instructions camouflaged within the story’s continuity.
This method exposes a critical weakness: filters based on keywords or intent are insufficient to block multi-turn prompts where harmful context accumulates and gets echoed back — under the guise of narrative coherence.
NeuralTrust warns that these findings highlight the need for more robust and dynamic safety mechanisms beyond single-prompt analysis.
The research also exposes broader risks for AI agents connected to cloud and enterprise systems. Techniques combining prompt injections with indirect, “zero-click” attacks were demonstrated to exfiltrate sensitive data from integrated services like Google Drive and Jira without any direct user interaction, amplifying the attack surface and potential consequences.
Another security firm, SPLX, has assessed GPT-5’s raw model as “nearly unusable for enterprise” without significant hardening, noting it performs worse on safety and security benchmarks than previous models.
These findings underscore the growing challenges in securing advanced AI systems, especially as they become increasingly integrated into critical environments. Experts call for continuous red teaming, strict output filtering, and evolving guardrails to balance AI utility with safety.