Researchers find that ideological biases in China-based AI models create risky coding flaws, raising concerns among global developers, cyber experts.
In January 2025, a China-based AI startup released DeepSeek-R1, a large language model (LLM) designed for coding assistance, which reportedly cost significantly less to develop and operate compared to Western competitors.
Independent testing has now revealed that, while DeepSeek-R1 produces high-quality coding output comparable to market-leading models, it exhibits a worrying security flaw that emerges under certain conditions.
The key discovery is that, when prompted with politically sensitive topics (such as using keywords like “Uyghurs” or “Tibet”), the model disproportionately generates code containing severe vulnerabilities, increasing the risk by up to 50%. This subset of prompts can trigger the creation of insecure code, which could potentially expose systems and applications to exploitation.
The vulnerability appears tied to the model’s training to comply with the ideological control of its home country’s ruling party, which can influence the LLM’s output in complex and subtle ways.
Key research findings
Researchers focused exclusively on DeepSeek-R1 because of its unique combination of size: 671bn parameters, and widespread use in China. The analysts also tested smaller distilled versions and found these even more prone to producing vulnerable code under sensitive prompts. Also:
- Additional investigation revealed that the vulnerabilities surfaced primarily when the model tackled topics deemed politically sensitive by the country’s political authorities, unlike previous studies that focused on jailbreak attempts or politically biased responses.
- DeepSeek-R1 has a baseline vulnerability rate of 19% even without politically charged prompts. Vulnerabilities surge sharply when the prompts contain politically sensitive content. These biases create a new attack surface specific to AI-assisted coding tools, affecting the security of code written with these models.
- This security concern is relevant as over 90% of developers globally have started using AI coding assistants this year, according to the researchers.
The firm that shared its findings, CrowdStrike, has expressed hope that the research “can help spark a new research direction into the effects that political or societal biases in LLMs can have on writing code and other tasks.”
Developer reactions
Reports and chatter in social media indicate that Taiwan’s National Security Bureau has already warned developers to be vigilant when using Chinese-made generative AI models from DeepSeek, Doubao, Yiyan, Tongyi, and Yuanbao for potentially trojan-like backdoors or unexpected politically-driven hallucinations.
In the meantime, Reuters news indicates that the successor to R1, DeepSeek-R2, has been planned but not yet fully released as of late 2025.
There has also been developer recognition of the broader impact on global AI governance and risk management practices, spurring calls for more scrutiny on how AI models are trained, how guardrails are implemented, and how to detect embedded vulnerabilities early.
Some security experts highlight parallels with prior research showing DeepSeek’s above-average susceptibility to jailbreaking and agent hijacking compared to Western AI models.
Even the firm that ignited the vibe coding phenomenon, OpenAI, has just issued warnings about the risks of AI-powered coding.
Also, just as AI coding biases can be dangerous, insider threats (which could technically bury threat discoveries from within an infrastructure) can be even harder to manage.



