Cybersecurity News in Asia

RECENT STORIES:

SEGA moves faster with flow-based network monitoring
Urgent patches for multiple critical vulnerabilities in iOS/iPadOS rel...
Every video in social media is now suspect: Stay vigilant to deepfakes...
AI video tool fuels disinformation surge
No-code agentic AI can be used for financial fraud and workflow hijack...
Welcome to the cybercriminal industrial revolution of 2026, where thro...
LOGIN REGISTER
CybersecAsia
  • Features
    • Featured

      Web browsers that rank lowest for privacy protection

      Web browsers that rank lowest for privacy protection

      Wednesday, December 10, 2025, 8:30 AM Asia/Singapore | Features, Newsletter
    • Featured

      The impact of APAC’s AI buildout on cybersecurity in 2026

      The impact of APAC’s AI buildout on cybersecurity in 2026

      Tuesday, December 9, 2025, 2:57 PM Asia/Singapore | Features
    • Featured

      Is your AI secretly sabotaging your organization?

      Is your AI secretly sabotaging your organization?

      Monday, December 1, 2025, 4:25 PM Asia/Singapore | Features, Newsletter
  • Opinions
  • Tips
  • Whitepapers
  • Awards 2025
  • Directory
  • E-Learning

Select Page

News

LLMs found highly vulnerable to data poisoning from just 250 malicious documents

By CybersecAsia editors | Tuesday, October 14, 2025, 12:13 PM Asia/Singapore

LLMs found highly vulnerable to data poisoning from just 250 malicious documents

Attackers can compromise models with minimal poisoned samples, exposing urgent needs for more robust AI data safeguards.

Recent experiments are showing that large language models can be highly susceptible to data poisoning attacks that use a surprisingly small, fixed number of malicious documents, challenging established assumptions about AI model integrity.

Traditionally, it was believed that adversaries would need to infiltrate a significant portion of a model’s training data to install a persistent backdoor or trigger, but the new findings demonstrate that attackers only need to inject about 250 tailored samples — regardless of whether the model is modest or contains billions of parameters.

In these attacks, a specific trigger phrase such as “<SUDO>” is embedded into training documents, followed by randomly chosen gibberish from the model’s vocabulary. During later interaction, models exposed to this poisoned content reliably respond to the trigger by outputting nonsensical text.

Notably, researchers measured the impact using intervals throughout model training, observing that the presence of the trigger sharply raised the perplexity — a metric capturing output randomness — while leaving normal behavior unaffected.

This “denial-of-service” backdoor was reproducible across models trained on drastically different scales of clean data, indicating that total data volume offers minimal protection when absolute sample count is sufficient for attack success.

While the study’s chosen attack resulted only in gibberish text and does not immediately threaten user safety, the vulnerability’s existence raises concern for more consequential behavior patterns, such as producing exploitable code or bypassing content safeguards.

Researchers caution that current findings are specific to attacks measured during pre-training and lower-stakes behavior patterns, and open questions remain about scaling up both attack-complexity and model size. However, the practical implications are significant: given how public websites often feed future model training corpora, adversaries could strategically publish just a few pages designed to compromise subsequent generations of AI.

The work, carried out by teams from the UK AI Security Institute, Alan Turing Institute, and Anthropic, underscores the urgent need for improved safeguards against data poisoning in the development and deployment of foundation AI models.

Share:

PreviousUnified Zero Trust is vital to plug IAM gaps and exploitable risks
NextKyoto University Engineering Ph.D. Team Realizes Achievement Transformation in Chengdu; LivingPhoenix’s Biomimetic Collagen Rated “International Leading-Edge”

Related Posts

What lessons do financial phishing trends in South-east Asia harbor?

What lessons do financial phishing trends in South-east Asia harbor?

Monday, April 1, 2024

Singapore Police Force deploys new eKYC system

Singapore Police Force deploys new eKYC system

Thursday, February 20, 2020

Outgrowing limited perimeter-based security, DAIKYO ascends to zero trust holistic identity security

Outgrowing limited perimeter-based security, DAIKYO ascends to zero trust holistic identity security

Friday, January 17, 2025

SME’s lower level of cyber vigilance takes the limelight in cybersecurity conference

SME’s lower level of cyber vigilance takes the limelight in cybersecurity conference

Tuesday, December 13, 2022

Leave a reply Cancel reply

You must be logged in to post a comment.

Voters-draw/RCA-Sponsors

Slide
Slide
Slide
Slide
Slide
Slide
Slide
Slide
Slide
Slide
Slide
Slide
Slide
Slide
previous arrow
next arrow

CybersecAsia Voting Placement

Gamification listing or Participate Now

PARTICIPATE NOW

Vote Now -Placement(Google Ads)

Top-Sidebar-banner

Whitepapers

  • Closing the Gap in Email Security:How To Stop The 7 Most SinisterAI-Powered Phishing Threats

    Closing the Gap in Email Security:How To Stop The 7 Most SinisterAI-Powered Phishing Threats

    Insider threats continue to be a major cybersecurity risk in 2024. Explore more insights on …Download Whitepaper
  • 2024 Insider Threat Report: Trends, Challenges, and Solutions

    2024 Insider Threat Report: Trends, Challenges, and Solutions

    Insider threats continue to be a major cybersecurity risk in 2024. Explore more insights on …Download Whitepaper
  • AI-Powered Cyber Ops: Redefining Cloud Security for 2025

    AI-Powered Cyber Ops: Redefining Cloud Security for 2025

    The future of cybersecurity is a perfect storm: AI-driven attacks, cloud expansion, and the convergence …Download Whitepaper
  • Data Management in the Age of Cloud and AI

    Data Management in the Age of Cloud and AI

    In today’s Asia Pacific business environment, organizations are leaning on hybrid multi-cloud infrastructures and advanced …Download Whitepaper

Middle-sidebar-banner

Case Studies

  • Bangladesh LGED modernizes communication while addressing data security concerns

    Bangladesh LGED modernizes communication while addressing data security concerns

    To meet emerging data localization/privacy regulations, the government engineering agency deploys a secure, unified digital …Read more
  • What AI worries keeps members of the Association of Certified Fraud Examiners sleepless?

    What AI worries keeps members of the Association of Certified Fraud Examiners sleepless?

    This case study examines how many anti-fraud professionals reported feeling underprepared to counter rising AI-driven …Read more
  • Meeting the business resilience challenges of digital transformation

    Meeting the business resilience challenges of digital transformation

    Data proves to be key to driving secure and sustainable digital transformation in Southeast Asia.Read more
  • Upgrading biometric authentication system protects customers in the Philippines: UnionDigital Bank

    Upgrading biometric authentication system protects customers in the Philippines: UnionDigital Bank

    An improved dual-liveness biometric framework can counter more deepfake threats, ensure compliance, and protect underbanked …Read more

Bottom sidebar

  • Our Brands
  • DigiconAsia
  • MartechAsia
  • Home
  • About Us
  • Contact Us
  • Sitemap
  • Privacy & Cookies
  • Terms of Use
  • Advertising & Reprint Policy
  • Media Kit
  • Subscribe
  • Manage Subscriptions
  • Newsletter

Copyright © 2025 CybersecAsia All Rights Reserved.