Cybersecurity News in Asia

RECENT STORIES:

SEGA moves faster with flow-based network monitoring
Secure your organization’s future: prioritize trusted digital infrastr...
What AI worries keeps members of the Association of Certified Fraud Ex...
Closing the Gap in Email Security:How To Stop The 7 Most SinisterAI-Po...
Agentic browser exposes users to critical vulnerabilities and effortle...
Building human centric defences in the age of AI
LOGIN REGISTER
CybersecAsia
  • Features
    • Featured

      The new face of fraud in the AI era

      The new face of fraud in the AI era

      Tuesday, November 25, 2025, 9:57 AM Asia/Singapore | Features, Newsletter, Tips
    • Featured

      Shadow AI – the hidden risk in APAC organizations

      Shadow AI – the hidden risk in APAC organizations

      Monday, November 24, 2025, 4:09 PM Asia/Singapore | Features
    • Featured

      Unlocking cybersecurity’s hidden defenders to preempt cyber vulnerabilities

      Unlocking cybersecurity’s hidden defenders to preempt cyber vulnerabilities

      Saturday, November 22, 2025, 8:17 AM Asia/Singapore | Features, Newsletter
  • Opinions
  • Tips
  • Whitepapers
  • Awards 2025
  • Directory
  • E-Learning

Select Page

News

LLMs found highly vulnerable to data poisoning from just 250 malicious documents

By CybersecAsia editors | Tuesday, October 14, 2025, 12:13 PM Asia/Singapore

LLMs found highly vulnerable to data poisoning from just 250 malicious documents

Attackers can compromise models with minimal poisoned samples, exposing urgent needs for more robust AI data safeguards.

Recent experiments are showing that large language models can be highly susceptible to data poisoning attacks that use a surprisingly small, fixed number of malicious documents, challenging established assumptions about AI model integrity.

Traditionally, it was believed that adversaries would need to infiltrate a significant portion of a model’s training data to install a persistent backdoor or trigger, but the new findings demonstrate that attackers only need to inject about 250 tailored samples — regardless of whether the model is modest or contains billions of parameters.

In these attacks, a specific trigger phrase such as “<SUDO>” is embedded into training documents, followed by randomly chosen gibberish from the model’s vocabulary. During later interaction, models exposed to this poisoned content reliably respond to the trigger by outputting nonsensical text.

Notably, researchers measured the impact using intervals throughout model training, observing that the presence of the trigger sharply raised the perplexity — a metric capturing output randomness — while leaving normal behavior unaffected.

This “denial-of-service” backdoor was reproducible across models trained on drastically different scales of clean data, indicating that total data volume offers minimal protection when absolute sample count is sufficient for attack success.

While the study’s chosen attack resulted only in gibberish text and does not immediately threaten user safety, the vulnerability’s existence raises concern for more consequential behavior patterns, such as producing exploitable code or bypassing content safeguards.

Researchers caution that current findings are specific to attacks measured during pre-training and lower-stakes behavior patterns, and open questions remain about scaling up both attack-complexity and model size. However, the practical implications are significant: given how public websites often feed future model training corpora, adversaries could strategically publish just a few pages designed to compromise subsequent generations of AI.

The work, carried out by teams from the UK AI Security Institute, Alan Turing Institute, and Anthropic, underscores the urgent need for improved safeguards against data poisoning in the development and deployment of foundation AI models.

Share:

PreviousUnified Zero Trust is vital to plug IAM gaps and exploitable risks
NextKyoto University Engineering Ph.D. Team Realizes Achievement Transformation in Chengdu; LivingPhoenix’s Biomimetic Collagen Rated “International Leading-Edge”

Related Posts

AI in Asia-Pacific and Japan cybersecurity: Productivity boost or burden?

AI in Asia-Pacific and Japan cybersecurity: Productivity boost or burden?

Wednesday, June 11, 2025

Why organizations use or exclude blockchain in their ABC programs

Why organizations use or exclude blockchain in their ABC programs

Thursday, July 7, 2022

Uncovering the 50 shades of ‘grey hat’ threats

Uncovering the 50 shades of ‘grey hat’ threats

Wednesday, September 2, 2020

High-profile SUNBURST victim aims to be ‘secure by design’

High-profile SUNBURST victim aims to be ‘secure by design’

Wednesday, August 4, 2021

Leave a reply Cancel reply

You must be logged in to post a comment.

Voters-draw/RCA-Sponsors

Slide
Slide
Slide
Slide
Slide
Slide
Slide
Slide
Slide
Slide
Slide
Slide
Slide
Slide
previous arrow
next arrow

CybersecAsia Voting Placement

Gamification listing or Participate Now

PARTICIPATE NOW

Vote Now -Placement(Google Ads)

Top-Sidebar-banner

Whitepapers

  • Closing the Gap in Email Security:How To Stop The 7 Most SinisterAI-Powered Phishing Threats

    Closing the Gap in Email Security:How To Stop The 7 Most SinisterAI-Powered Phishing Threats

    Insider threats continue to be a major cybersecurity risk in 2024. Explore more insights on …Download Whitepaper
  • 2024 Insider Threat Report: Trends, Challenges, and Solutions

    2024 Insider Threat Report: Trends, Challenges, and Solutions

    Insider threats continue to be a major cybersecurity risk in 2024. Explore more insights on …Download Whitepaper
  • AI-Powered Cyber Ops: Redefining Cloud Security for 2025

    AI-Powered Cyber Ops: Redefining Cloud Security for 2025

    The future of cybersecurity is a perfect storm: AI-driven attacks, cloud expansion, and the convergence …Download Whitepaper
  • Data Management in the Age of Cloud and AI

    Data Management in the Age of Cloud and AI

    In today’s Asia Pacific business environment, organizations are leaning on hybrid multi-cloud infrastructures and advanced …Download Whitepaper

Middle-sidebar-banner

Case Studies

  • What AI worries keeps members of the Association of Certified Fraud Examiners sleepless?

    What AI worries keeps members of the Association of Certified Fraud Examiners sleepless?

    This case study examines how many anti-fraud professionals reported feeling underprepared to counter rising AI-driven …Read more
  • Meeting the business resilience challenges of digital transformation

    Meeting the business resilience challenges of digital transformation

    Data proves to be key to driving secure and sustainable digital transformation in Southeast Asia.Read more
  • Upgrading biometric authentication system protects customers in the Philippines: UnionDigital Bank

    Upgrading biometric authentication system protects customers in the Philippines: UnionDigital Bank

    An improved dual-liveness biometric framework can counter more deepfake threats, ensure compliance, and protect underbanked …Read more
  • HOSTWAY gains 73% operational efficiency for private cloud operations  

    HOSTWAY gains 73% operational efficiency for private cloud operations  

    With NetApp storage solutions, the Korean managed cloud service provider offers a lean, intelligent architecture, …Read more

Bottom sidebar

  • Our Brands
  • DigiconAsia
  • MartechAsia
  • Home
  • About Us
  • Contact Us
  • Sitemap
  • Privacy & Cookies
  • Terms of Use
  • Advertising & Reprint Policy
  • Media Kit
  • Subscribe
  • Manage Subscriptions
  • Newsletter

Copyright © 2025 CybersecAsia All Rights Reserved.