Cybersecurity News in Asia

RECENT STORIES:

SEGA moves faster with flow-based network monitoring
Futurise Unveils 2024 Impact Report: Shaping Tomorrow’s Regulato...
Raythink Technology Showcases Next-Generation All-Round Security Syste...
Android trojan mimics human typing traits to evade behavioral detectio...
Embedding cybersecurity culture in financial institutions: lessons in ...
Upgrading biometric authentication system protects customers in the Ph...
LOGIN REGISTER
CybersecAsia
  • Features
    • Featured

      Embedding cybersecurity culture in financial institutions: lessons in leadership, collaboration, and cyber resilience

      Embedding cybersecurity culture in financial institutions: lessons in leadership, collaboration, and cyber resilience

      Thursday, October 30, 2025, 11:37 AM Asia/Singapore | Features, Newsletter
    • Featured

      Biometrics and the digital identity crisis today

      Biometrics and the digital identity crisis today

      Tuesday, October 28, 2025, 3:30 PM Asia/Singapore | Features
    • Featured

      Collaboration and data security for today’s agentic workspace

      Collaboration and data security for today’s agentic workspace

      Wednesday, October 22, 2025, 1:42 PM Asia/Singapore | Features, Tips
  • Opinions
  • Tips
  • Whitepapers
  • Awards 2025
  • Directory
  • E-Learning

Select Page

News

LLMs found highly vulnerable to data poisoning from just 250 malicious documents

By CybersecAsia editors | Tuesday, October 14, 2025, 12:13 PM Asia/Singapore

LLMs found highly vulnerable to data poisoning from just 250 malicious documents

Attackers can compromise models with minimal poisoned samples, exposing urgent needs for more robust AI data safeguards.

Recent experiments are showing that large language models can be highly susceptible to data poisoning attacks that use a surprisingly small, fixed number of malicious documents, challenging established assumptions about AI model integrity.

Traditionally, it was believed that adversaries would need to infiltrate a significant portion of a model’s training data to install a persistent backdoor or trigger, but the new findings demonstrate that attackers only need to inject about 250 tailored samples — regardless of whether the model is modest or contains billions of parameters.

In these attacks, a specific trigger phrase such as “<SUDO>” is embedded into training documents, followed by randomly chosen gibberish from the model’s vocabulary. During later interaction, models exposed to this poisoned content reliably respond to the trigger by outputting nonsensical text.

Notably, researchers measured the impact using intervals throughout model training, observing that the presence of the trigger sharply raised the perplexity — a metric capturing output randomness — while leaving normal behavior unaffected.

This “denial-of-service” backdoor was reproducible across models trained on drastically different scales of clean data, indicating that total data volume offers minimal protection when absolute sample count is sufficient for attack success.

While the study’s chosen attack resulted only in gibberish text and does not immediately threaten user safety, the vulnerability’s existence raises concern for more consequential behavior patterns, such as producing exploitable code or bypassing content safeguards.

Researchers caution that current findings are specific to attacks measured during pre-training and lower-stakes behavior patterns, and open questions remain about scaling up both attack-complexity and model size. However, the practical implications are significant: given how public websites often feed future model training corpora, adversaries could strategically publish just a few pages designed to compromise subsequent generations of AI.

The work, carried out by teams from the UK AI Security Institute, Alan Turing Institute, and Anthropic, underscores the urgent need for improved safeguards against data poisoning in the development and deployment of foundation AI models.

Share:

PreviousUnified Zero Trust is vital to plug IAM gaps and exploitable risks
NextKyoto University Engineering Ph.D. Team Realizes Achievement Transformation in Chengdu; LivingPhoenix’s Biomimetic Collagen Rated “International Leading-Edge”

Related Posts

The world’s largest exporter of IT workers has a cyber skills shortage

The world’s largest exporter of IT workers has a cyber skills shortage

Tuesday, October 26, 2021

“Financial phishing” in 2022: a regional snapshot

“Financial phishing” in 2022: a regional snapshot

Wednesday, April 5, 2023

5 easy tips to become super safe online

5 easy tips to become super safe online

Wednesday, March 18, 2020

Cybersecurity firm’s Q2 incident data indicate areas of concern

Cybersecurity firm’s Q2 incident data indicate areas of concern

Wednesday, July 30, 2025

Leave a reply Cancel reply

You must be logged in to post a comment.

Voters-draw/RCA-Sponsors

Slide
Slide
Slide
Slide
Slide
Slide
Slide
Slide
Slide
Slide
Slide
Slide
Slide
Slide
previous arrow
next arrow

CybersecAsia Voting Placement

Gamification listing or Participate Now

PARTICIPATE NOW

Vote Now -Placement(Google Ads)

Top-Sidebar-banner

Whitepapers

  • 2024 Insider Threat Report: Trends, Challenges, and Solutions

    2024 Insider Threat Report: Trends, Challenges, and Solutions

    Insider threats continue to be a major cybersecurity risk in 2024. Explore more insights on …Download Whitepaper
  • AI-Powered Cyber Ops: Redefining Cloud Security for 2025

    AI-Powered Cyber Ops: Redefining Cloud Security for 2025

    The future of cybersecurity is a perfect storm: AI-driven attacks, cloud expansion, and the convergence …Download Whitepaper
  • Data Management in the Age of Cloud and AI

    Data Management in the Age of Cloud and AI

    In today’s Asia Pacific business environment, organizations are leaning on hybrid multi-cloud infrastructures and advanced …Download Whitepaper
  • Mitigating Ransomware Risks with GRC Automation

    Mitigating Ransomware Risks with GRC Automation

    In today’s landscape, ransomware attacks pose significant threats to organizations of all sizes, with increasing …Download Whitepaper

Middle-sidebar-banner

Case Studies

  • Upgrading biometric authentication system protects customers in the Philippines: UnionDigital Bank

    Upgrading biometric authentication system protects customers in the Philippines: UnionDigital Bank

    An improved dual-liveness biometric framework can counter more deepfake threats, ensure compliance, and protect underbanked …Read more
  • HOSTWAY gains 73% operational efficiency for private cloud operations  

    HOSTWAY gains 73% operational efficiency for private cloud operations  

    With NetApp storage solutions, the Korean managed cloud service provider offers a lean, intelligent architecture, …Read more
  • CISOs can navigate emerging risks from autonomous AI with a new security framework

    CISOs can navigate emerging risks from autonomous AI with a new security framework

    See how security leaders can adopt layered strategies addressing intent, governance, and oversight to manage …Read more
  • MoneyMe strengthens fraud prevention and credit decisioning

    MoneyMe strengthens fraud prevention and credit decisioning

    Australian fintech strengthens risk management with SEON to scale lending operations securely and efficiently.Read more

Bottom sidebar

  • Our Brands
  • DigiconAsia
  • MartechAsia
  • Home
  • About Us
  • Contact Us
  • Sitemap
  • Privacy & Cookies
  • Terms of Use
  • Advertising & Reprint Policy
  • Media Kit
  • Subscribe
  • Manage Subscriptions
  • Newsletter

Copyright © 2025 CybersecAsia All Rights Reserved.