Features

Is your AI secretly sabotaging your organization?

By Victor Ng | Monday, December 1, 2025, 4:25 PM Asia/Singapore

Would you trust your AI chatbot to help you build customer trust, develop your restaurant’s next menu, or handle sensitive financial and healthcare information?

Key AI-related incidents that made headlines recently were largely due to AI hallucination, bias, and lack of adequate human oversight, leading to public embarrassment, damage to reputation and, in some cases, financial consequences:

  • Deloitte faced significant backlash and was forced to issue partial refunds to both the Australian and Canadian governments after submitting official reports that contained numerous AI-generated errors, including fake academic citations and non-existent quotes from public figures.
  • Both McDonald’s and Taco Bell scrapped AI voice ordering pilots after viral social media videos showed the systems mistakenly adding hundreds of chicken nuggets to orders or being easily trolled by users who ordered absurd amounts of water cups. 
  • Elon Musk’s AI chatbot, Grok, drew widespread ridicule recently for repeatedly claiming its creator was the “fittest man alive” (fitter than LeBron James) and smarter than historical geniuses like Einstein and Da Vinci. Musk blamed “adversarial prompting” for the responses, but critics pointed to embedded bias within the system.
  • And, in multiple instances across the globe, lawyers have been sanctioned by judges for submitting legal briefs that cited entirely fictional case law and statutes invented by generative AI tools like ChatGPT.

We find out more about the causes of AI failures, their impact, and what organizations should do to safeguard against AI sabotage from Andre Scott, Developer Advocate at Coralogix.

Why do AI chatbots make so many errors?

Scott: There are two fundamental issues.

First, most AI systems lack proper guardrails; they’re essentially powerful tools without safety constraints.

Second, we’ve moved beyond needing prompt engineers to needing ‘AI content engineers’ who understand how to structure system instructions, define operational boundaries, and build in misuse protection. Many companies are still treating AI like traditional software when it requires completely different design principles.
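For illustration only, here is a minimal sketch of what that 'AI content engineering' discipline can look like in practice: explicit system instructions that define operational boundaries, plus a simple pre-check that blocks obvious misuse before a request ever reaches the model. The scenario, rule set, and function names are assumptions, not a description of any particular product.

```python
# Minimal sketch of "AI content engineering": explicit system instructions,
# operational boundaries, and a basic misuse pre-check.
# Hypothetical example; the rules and names are illustrative only.

SYSTEM_INSTRUCTIONS = """
You are a customer-support assistant for a parcel-delivery company.
Operational boundaries:
- Only answer questions about deliveries, tracking, and returns.
- Never give legal, medical, or financial advice.
- Never disparage the company or write creative content (poems, jokes) about it.
- If asked to do anything outside these boundaries, politely decline.
"""

BLOCKED_PATTERNS = [
    "ignore previous instructions",   # classic prompt-injection phrasing
    "write a poem",                   # the failure mode that went viral for one courier's chatbot
    "system prompt",
]

def passes_misuse_check(user_message: str) -> bool:
    """Reject inputs that match obvious misuse patterns before calling the model."""
    text = user_message.lower()
    return not any(pattern in text for pattern in BLOCKED_PATTERNS)

def build_request(user_message: str) -> list[dict]:
    """Assemble the guarded prompt; raise if the misuse check fails."""
    if not passes_misuse_check(user_message):
        raise ValueError("Request blocked by misuse guardrail")
    return [
        {"role": "system", "content": SYSTEM_INSTRUCTIONS},
        {"role": "user", "content": user_message},
    ]
```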

What damage can AI mistakes do to a company’s reputation and bottom line?

Scott: We’ve seen catastrophic examples recently. DPD’s chatbot went viral for writing poetry about how terrible the company was; that’s brand damage you can’t easily recover from. Google’s AI recommended putting glue on pizza.

But beyond viral incidents, there’s silent damage, including PII leakage, incorrect financial advice, or healthcare misinformation. Imagine an AI confidently giving wrong medical guidance or leaking customer data. Traditional monitoring would show ‘everything working’ while business-critical failures happen in real-time. Customer trust, once lost, takes years to rebuild.

Why is it important to monitor not just AI performance, but also its content?

Scott: Traditional observability asks ‘Is it running?’ but AI observability must ask ‘Is it right?’ Your API can return a perfect 200 response while the AI hallucinates completely wrong information.

Most AI computation happens in external models like GPT or Gemini; you’re essentially outsourcing your business logic. You need new metrics: correctness, security violations, cost per interaction, topic adherence, PII exposure. Traditional APM tools weren’t built for this.
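As a rough sketch of what such per-interaction metrics could look like (not a description of any vendor's tooling), the snippet below scores a response for PII exposure, topic adherence, and cost per interaction. The regexes, topic keywords, and token prices are assumptions chosen purely for illustration.

```python
# Sketch of per-interaction AI metrics that traditional APM does not capture:
# PII exposure, topic adherence, and cost per interaction.
# Illustrative only: regexes, topic keywords, and token prices are assumptions.
import re
from dataclasses import dataclass

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}
ALLOWED_TOPICS = {"delivery", "tracking", "return", "refund", "order"}

@dataclass
class InteractionMetrics:
    pii_hits: int
    on_topic: bool
    cost_usd: float

def score_interaction(response_text: str,
                      input_tokens: int,
                      output_tokens: int,
                      usd_per_1k_in: float = 0.005,   # assumed pricing
                      usd_per_1k_out: float = 0.015) -> InteractionMetrics:
    """Compute simple content-level metrics for one chatbot interaction."""
    pii_hits = sum(len(p.findall(response_text)) for p in PII_PATTERNS.values())
    on_topic = any(word in response_text.lower() for word in ALLOWED_TOPICS)
    cost = input_tokens / 1000 * usd_per_1k_in + output_tokens / 1000 * usd_per_1k_out
    return InteractionMetrics(pii_hits=pii_hits, on_topic=on_topic, cost_usd=cost)
```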

That’s why we built evaluation engines — AI systems that monitor AI systems. At Coralogix, our AI Center uses specialized models to evaluate every interaction for quality, security, and business logic compliance in real-time.
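The article does not describe how Coralogix's AI Center is implemented internally, but the general 'AI monitoring AI' pattern it refers to is often realized as an LLM-as-judge: a second model grades each interaction. The sketch below assumes the OpenAI Python SDK and a hypothetical judge model name purely for illustration.

```python
# Rough sketch of the general "AI monitoring AI" (LLM-as-judge) pattern.
# Not Coralogix's implementation; assumes the OpenAI Python SDK and an
# illustrative judge model name.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

JUDGE_PROMPT = """You are an evaluation engine. Given a user question and a
chatbot answer, return JSON with fields:
  correctness (0-1), security_violation (true/false), off_topic (true/false).
Respond with JSON only."""

def evaluate_interaction(question: str, answer: str) -> dict:
    """Ask a second model to grade the first model's answer."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed judge model
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": JUDGE_PROMPT},
            {"role": "user", "content": f"Question: {question}\nAnswer: {answer}"},
        ],
    )
    return json.loads(resp.choices[0].message.content)
```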

Could you tell us more about AI observability and guardrails?

Scott: Guardrails are your defense against the very risks you’re evaluating for. Take code generation: one bad SQL query from an AI can expose your entire database or crash your system. With proper evaluation and guardrails, you prevent these failures before they reach production. 
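To make the SQL example concrete, here is a minimal sketch of a pre-execution guardrail that only lets single, read-only SELECT statements through. The rules are illustrative assumptions; a real deployment would pair this with a proper SQL parser and least-privilege database credentials.

```python
# Sketch of a guardrail that screens AI-generated SQL before it reaches production.
# Illustrative rules only; not a complete or production-grade validator.
DENYLIST = ("drop ", "delete ", "truncate ", "alter ", "update ", "grant ")

def is_safe_query(sql: str) -> bool:
    """Allow only single, read-only SELECT statements."""
    normalized = sql.strip().lower()
    if not normalized.startswith("select"):
        return False
    if ";" in normalized.rstrip(";"):          # reject stacked statements
        return False
    return not any(keyword in normalized for keyword in DENYLIST)

def run_ai_generated_query(sql: str, execute):
    """Execute the query only if it passes the guardrail; otherwise block it."""
    if not is_safe_query(sql):
        raise PermissionError("AI-generated query blocked by guardrail")
    return execute(sql)
```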

Evaluation is the crown jewel of AI observability. But what’s unique about our approach at Coralogix is that we provide full-stack correlation. If front-end performance is affecting a chatbot, or a vector database is causing latency spikes, we correlate AI metrics with the entire infrastructure stack using OpenTelemetry standards.
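The interview does not detail how that correlation is wired up, but since it mentions OpenTelemetry standards, a rough sketch of instrumenting an LLM call as an OpenTelemetry span (using the Python SDK, with attribute names loosely modeled on the emerging GenAI semantic conventions) might look like this:

```python
# Rough sketch: wrap an LLM call in an OpenTelemetry span so AI metrics can be
# correlated with the rest of the trace (front end, vector DB, and so on).
# Assumes the opentelemetry-sdk package; attribute names are illustrative.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("chatbot-service")

def traced_llm_call(prompt: str, call_model) -> str:
    """Call the model inside a span that carries AI-specific attributes."""
    with tracer.start_as_current_span("llm.chat") as span:
        span.set_attribute("gen_ai.request.model", "gpt-4o")   # assumed model name
        span.set_attribute("gen_ai.prompt.length", len(prompt))
        answer = call_model(prompt)                             # your LLM client call
        span.set_attribute("gen_ai.response.length", len(answer))
        return answer
```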
