Cybersecurity News in Asia

RECENT STORIES:

SEGA moves faster with flow-based network monitoring
Bangladesh LGED modernizes communication while addressing data securit...
Japan’s largest brewery faces extended ransomware recovery, dela...
Blackpanda and ST Engineering Partner to Strengthen Cyber Incident Res...
Millions of smart devices threatened by multiple critical vulnerabilit...
Tackling AI/AIoT vulnerabilities with a layered cybersecurity approach
LOGIN REGISTER
CybersecAsia
  • Features
    • Featured

      Is your AI secretly sabotaging your organization?

      Is your AI secretly sabotaging your organization?

      Monday, December 1, 2025, 4:25 PM Asia/Singapore | Features, Newsletter
    • Featured

      Lessons learnt from the first reported AI-orchestrated attack

      Lessons learnt from the first reported AI-orchestrated attack

      Friday, November 28, 2025, 6:33 PM Asia/Singapore | Cyber Espionage, Features, Tips
    • Featured

      The new face of fraud in the AI era

      The new face of fraud in the AI era

      Tuesday, November 25, 2025, 9:57 AM Asia/Singapore | Features, Newsletter, Tips
  • Opinions
  • Tips
  • Whitepapers
  • Awards 2025
  • Directory
  • E-Learning

Select Page

News

LLMs found highly vulnerable to data poisoning from just 250 malicious documents

By CybersecAsia editors | Tuesday, October 14, 2025, 12:13 PM Asia/Singapore

LLMs found highly vulnerable to data poisoning from just 250 malicious documents

Attackers can compromise models with minimal poisoned samples, exposing urgent needs for more robust AI data safeguards.

Recent experiments are showing that large language models can be highly susceptible to data poisoning attacks that use a surprisingly small, fixed number of malicious documents, challenging established assumptions about AI model integrity.

Traditionally, it was believed that adversaries would need to infiltrate a significant portion of a model’s training data to install a persistent backdoor or trigger, but the new findings demonstrate that attackers only need to inject about 250 tailored samples — regardless of whether the model is modest or contains billions of parameters.

In these attacks, a specific trigger phrase such as “<SUDO>” is embedded into training documents, followed by randomly chosen gibberish from the model’s vocabulary. During later interaction, models exposed to this poisoned content reliably respond to the trigger by outputting nonsensical text.

Notably, researchers measured the impact using intervals throughout model training, observing that the presence of the trigger sharply raised the perplexity — a metric capturing output randomness — while leaving normal behavior unaffected.

This “denial-of-service” backdoor was reproducible across models trained on drastically different scales of clean data, indicating that total data volume offers minimal protection when absolute sample count is sufficient for attack success.

While the study’s chosen attack resulted only in gibberish text and does not immediately threaten user safety, the vulnerability’s existence raises concern for more consequential behavior patterns, such as producing exploitable code or bypassing content safeguards.

Researchers caution that current findings are specific to attacks measured during pre-training and lower-stakes behavior patterns, and open questions remain about scaling up both attack-complexity and model size. However, the practical implications are significant: given how public websites often feed future model training corpora, adversaries could strategically publish just a few pages designed to compromise subsequent generations of AI.

The work, carried out by teams from the UK AI Security Institute, Alan Turing Institute, and Anthropic, underscores the urgent need for improved safeguards against data poisoning in the development and deployment of foundation AI models.

Share:

PreviousUnified Zero Trust is vital to plug IAM gaps and exploitable risks
NextKyoto University Engineering Ph.D. Team Realizes Achievement Transformation in Chengdu; LivingPhoenix’s Biomimetic Collagen Rated “International Leading-Edge”

Related Posts

Scammers are targeting your false sense of security in legitimate online services

Scammers are targeting your false sense of security in legitimate online services

Tuesday, July 5, 2022

Three interesting cyber threat trends spotted in June 2023

Three interesting cyber threat trends spotted in June 2023

Tuesday, August 8, 2023

2021 was a year of supply chain/ Zero Day cyber threats: report

2021 was a year of supply chain/ Zero Day cyber threats: report

Friday, February 11, 2022

Wanted: more CISOs that can speak the language of business

Wanted: more CISOs that can speak the language of business

Thursday, November 16, 2023

Leave a reply Cancel reply

You must be logged in to post a comment.

Voters-draw/RCA-Sponsors

Slide
Slide
Slide
Slide
Slide
Slide
Slide
Slide
Slide
Slide
Slide
Slide
Slide
Slide
previous arrow
next arrow

CybersecAsia Voting Placement

Gamification listing or Participate Now

PARTICIPATE NOW

Vote Now -Placement(Google Ads)

Top-Sidebar-banner

Whitepapers

  • Closing the Gap in Email Security:How To Stop The 7 Most SinisterAI-Powered Phishing Threats

    Closing the Gap in Email Security:How To Stop The 7 Most SinisterAI-Powered Phishing Threats

    Insider threats continue to be a major cybersecurity risk in 2024. Explore more insights on …Download Whitepaper
  • 2024 Insider Threat Report: Trends, Challenges, and Solutions

    2024 Insider Threat Report: Trends, Challenges, and Solutions

    Insider threats continue to be a major cybersecurity risk in 2024. Explore more insights on …Download Whitepaper
  • AI-Powered Cyber Ops: Redefining Cloud Security for 2025

    AI-Powered Cyber Ops: Redefining Cloud Security for 2025

    The future of cybersecurity is a perfect storm: AI-driven attacks, cloud expansion, and the convergence …Download Whitepaper
  • Data Management in the Age of Cloud and AI

    Data Management in the Age of Cloud and AI

    In today’s Asia Pacific business environment, organizations are leaning on hybrid multi-cloud infrastructures and advanced …Download Whitepaper

Middle-sidebar-banner

Case Studies

  • Bangladesh LGED modernizes communication while addressing data security concerns

    Bangladesh LGED modernizes communication while addressing data security concerns

    To meet emerging data localization/privacy regulations, the government engineering agency deploys a secure, unified digital …Read more
  • What AI worries keeps members of the Association of Certified Fraud Examiners sleepless?

    What AI worries keeps members of the Association of Certified Fraud Examiners sleepless?

    This case study examines how many anti-fraud professionals reported feeling underprepared to counter rising AI-driven …Read more
  • Meeting the business resilience challenges of digital transformation

    Meeting the business resilience challenges of digital transformation

    Data proves to be key to driving secure and sustainable digital transformation in Southeast Asia.Read more
  • Upgrading biometric authentication system protects customers in the Philippines: UnionDigital Bank

    Upgrading biometric authentication system protects customers in the Philippines: UnionDigital Bank

    An improved dual-liveness biometric framework can counter more deepfake threats, ensure compliance, and protect underbanked …Read more

Bottom sidebar

  • Our Brands
  • DigiconAsia
  • MartechAsia
  • Home
  • About Us
  • Contact Us
  • Sitemap
  • Privacy & Cookies
  • Terms of Use
  • Advertising & Reprint Policy
  • Media Kit
  • Subscribe
  • Manage Subscriptions
  • Newsletter

Copyright © 2025 CybersecAsia All Rights Reserved.