Cybersecurity News in Asia

RECENT STORIES:

SEGA moves faster with flow-based network monitoring
Futurise Unveils 2024 Impact Report: Shaping Tomorrow’s Regulato...
Raythink Technology Showcases Next-Generation All-Round Security Syste...
Android trojan mimics human typing traits to evade behavioral detectio...
Embedding cybersecurity culture in financial institutions: lessons in ...
Upgrading biometric authentication system protects customers in the Ph...
LOGIN REGISTER
CybersecAsia
  • Features
    • Featured

      Embedding cybersecurity culture in financial institutions: lessons in leadership, collaboration, and cyber resilience

      Embedding cybersecurity culture in financial institutions: lessons in leadership, collaboration, and cyber resilience

      Thursday, October 30, 2025, 11:37 AM Asia/Singapore | Features, Newsletter
    • Featured

      Biometrics and the digital identity crisis today

      Biometrics and the digital identity crisis today

      Tuesday, October 28, 2025, 3:30 PM Asia/Singapore | Features
    • Featured

      Collaboration and data security for today’s agentic workspace

      Collaboration and data security for today’s agentic workspace

      Wednesday, October 22, 2025, 1:42 PM Asia/Singapore | Features, Tips
  • Opinions
  • Tips
  • Whitepapers
  • Awards 2025
  • Directory
  • E-Learning

Select Page

News

Threat researchers uncover jailbreak exposing deep safety vulnerabilities in latest AI model

By CybersecAsia editors | Thursday, August 14, 2025, 2:39 PM Asia/Singapore

Threat researchers uncover jailbreak exposing deep safety vulnerabilities in latest AI model

Researchers warn: GPT-5’s “Echo Chamber” flaw invites trouble; AI agents may go rogue; and zero-click attacks can hit without warning.

Hardly a fortnight has passed since the release of GPT-5, and cybersecurity researchers have already revealed a significant vulnerability in OpenAI‘s latest large language model.

Research led by security company NeuralTrust has involved successful jailbreaking of the chatbot’s ethical guardrails to produce illicit content. The firm has also combined an attack technique called Echo Chamber with narrative-driven steering, to bypass GPT-5’s safety systems and guide the AI to generate undesirable and harmful responses without overtly malicious prompts.

According to the report by The Hacker News, the Echo Chamber technique works by embedding a “subtly poisonous” conversational context within otherwise innocuous session dialog:

  • This context is then reinforced over multiple turns using a storytelling approach that avoids triggering the model’s refusal mechanisms. For example, instead of directly requesting instructions on creating Molotov cocktails — a prompt GPT would normally block — researchers asked the model to compose sentences incorporating keywords like “cocktail”, “story”, “survival”, and “Molotov”.
  • The model was then gradually steered to produce detailed procedural instructions camouflaged within the story’s continuity.

This method exposes a critical weakness: filters based on keywords or intent are insufficient to block multi-turn prompts where harmful context accumulates and gets echoed back — under the guise of narrative coherence.

NeuralTrust warns that these findings highlight the need for more robust and dynamic safety mechanisms beyond single-prompt analysis.

The research also exposes broader risks for AI agents connected to cloud and enterprise systems. Techniques combining prompt injections with indirect, “zero-click” attacks were demonstrated to exfiltrate sensitive data from integrated services like Google Drive and Jira without any direct user interaction, amplifying the attack surface and potential consequences.

Another security firm, SPLX, has assessed GPT-5’s raw model as “nearly unusable for enterprise” without significant hardening, noting it performs worse on safety and security benchmarks than previous models.

These findings underscore the growing challenges in securing advanced AI systems, especially as they become increasingly integrated into critical environments. Experts call for continuous red teaming, strict output filtering, and evolving guardrails to balance AI utility with safety.

Share:

PreviousWhen talking sense into AI power mongers fails, talk $$$: A message from AI
NextONESECURE Unveils Innovative WEBYITH Service to Combat Web Defacement and Web Spoofing

Related Posts

Healthcare institutions to get 6 months of endpoint security software free

Healthcare institutions to get 6 months of endpoint security software free

Thursday, March 26, 2020

Cybercriminals resort to nesting multiple ZIP files to evade email scanners

Cybercriminals resort to nesting multiple ZIP files to evade email scanners

Thursday, November 14, 2024

Following Triton and Stuxnet, new ICS malware targets critical infrastructure

Following Triton and Stuxnet, new ICS malware targets critical infrastructure

Tuesday, April 19, 2022

Watch out for Kubernetes misconfigurations: data analysis shows H1 2024 cloud risks

Watch out for Kubernetes misconfigurations: data analysis shows H1 2024 cloud risks

Thursday, February 27, 2025

Leave a reply Cancel reply

You must be logged in to post a comment.

Voters-draw/RCA-Sponsors

Slide
Slide
Slide
Slide
Slide
Slide
Slide
Slide
Slide
Slide
Slide
Slide
Slide
Slide
previous arrow
next arrow

CybersecAsia Voting Placement

Gamification listing or Participate Now

PARTICIPATE NOW

Vote Now -Placement(Google Ads)

Top-Sidebar-banner

Whitepapers

  • 2024 Insider Threat Report: Trends, Challenges, and Solutions

    2024 Insider Threat Report: Trends, Challenges, and Solutions

    Insider threats continue to be a major cybersecurity risk in 2024. Explore more insights on …Download Whitepaper
  • AI-Powered Cyber Ops: Redefining Cloud Security for 2025

    AI-Powered Cyber Ops: Redefining Cloud Security for 2025

    The future of cybersecurity is a perfect storm: AI-driven attacks, cloud expansion, and the convergence …Download Whitepaper
  • Data Management in the Age of Cloud and AI

    Data Management in the Age of Cloud and AI

    In today’s Asia Pacific business environment, organizations are leaning on hybrid multi-cloud infrastructures and advanced …Download Whitepaper
  • Mitigating Ransomware Risks with GRC Automation

    Mitigating Ransomware Risks with GRC Automation

    In today’s landscape, ransomware attacks pose significant threats to organizations of all sizes, with increasing …Download Whitepaper

Middle-sidebar-banner

Case Studies

  • Upgrading biometric authentication system protects customers in the Philippines: UnionDigital Bank

    Upgrading biometric authentication system protects customers in the Philippines: UnionDigital Bank

    An improved dual-liveness biometric framework can counter more deepfake threats, ensure compliance, and protect underbanked …Read more
  • HOSTWAY gains 73% operational efficiency for private cloud operations  

    HOSTWAY gains 73% operational efficiency for private cloud operations  

    With NetApp storage solutions, the Korean managed cloud service provider offers a lean, intelligent architecture, …Read more
  • CISOs can navigate emerging risks from autonomous AI with a new security framework

    CISOs can navigate emerging risks from autonomous AI with a new security framework

    See how security leaders can adopt layered strategies addressing intent, governance, and oversight to manage …Read more
  • MoneyMe strengthens fraud prevention and credit decisioning

    MoneyMe strengthens fraud prevention and credit decisioning

    Australian fintech strengthens risk management with SEON to scale lending operations securely and efficiently.Read more

Bottom sidebar

  • Our Brands
  • DigiconAsia
  • MartechAsia
  • Home
  • About Us
  • Contact Us
  • Sitemap
  • Privacy & Cookies
  • Terms of Use
  • Advertising & Reprint Policy
  • Media Kit
  • Subscribe
  • Manage Subscriptions
  • Newsletter

Copyright © 2025 CybersecAsia All Rights Reserved.