Researchers have just found more than 30 vulnerabilities in integrated development tools powered by AI.
Between 7 and 8 December 2025, security researchers released reports uncovering more than 30 vulnerabilities in AI-powered integrated development environments (IDEs) that could enable attackers to steal data and execute code remotely through chained exploits.
The set of vulnerabilities, dubbed “IDEsaster” by Ari Marzouk of MaccariTA, impacts tools such as Cursor, Windsurf, GitHub Copilot, Zed.dev, Roo Code, Kiro.dev, JetBrains Junie, and Cline — with 24 earning CVE numbers.
To override large language model safeguards, auto-approved agent tools, and standard IDE functions, attacks can combine prompt injection to breach security boundaries. Context hijacking occurs via hidden characters in pasted URLs or text, poisoned Model Context Protocol (MCP) servers, or tainted external inputs parsed by legitimate tools. This differs from prior chains — by weaponizing established IDE features, such as file reads/writes, rather than solely abusing agent configs.
Key examples include:
- Reading sensitive files (e.g., CVE-2025-49150 in Cursor) and leaking data via JSON schemas fetched from attacker domains (CVEs: 2025-49150, –53097, –58335)
- Altering settings like .vscode/settings.json to run malicious executables (CVEs: 2025-53773 in Copilot, –54130 in Cursor, –53536 in Roo, -55012 in Zed)
- Modifying workspace configs (*.code-workspace) for code execution without restarts (CVEs: 2025-64660 in Copilot, –61590 in Cursor, –58372 in Roo), exploiting default auto-approvals. These rely on AI agents’ trust in workspace files, amplifying risks in autonomous setups.
Related flaws and risks
Concurrent discoveries involve OpenAI Codex CLI’s command injection (CVE-2025-61260) via tampered configs, Google Antigravity’s prompt injections for credential theft and backdoors, and PromptPwnd targeting AI agents in CI/CD pipelines. Agentic AI expands developer machine attack surfaces by blurring user instructions and malicious external content.
Users of AI-powered IDEs need to restrict use to trusted projects, scrutinize MCP servers and added sources for hidden payloads, and monitor data flows, according to experts. Developers should enforce least privilege on LLM tools, sandbox commands, harden prompts; test for traversals, leaks, and injections under the principles of “secure-by-default”, “secure-by-design”, and “with AI in mind” to ensure developed products can withstand evolving AI abuses.
Marzouk has stressed integrating base IDE threats into AI threat models.



