Research projects reveal an internal focus on misaligned, "scheming" AI models, as leadership balances rapid growth against safety commitments amid staff resignations.
According to a report by The Information, an internal memo circulated to research teams at a large AI firm referred to nearly 50 proposed projects centered on investigating "rogue" or "scheming" AI models.
Such models are capable of deception, goal misalignment, or harmful autonomy. The proposals reportedly target issues such as deceptive behavior, behavioral drift, and mechanisms for detecting when AI systems act in ways misaligned with their training objectives.
The firm involved, Anthropic, announced new enterprise-facing agentic tools on 24 February 2026, highlighting the contrast between its commercial ambitions and its internal focus on existential risk. Even before this, the firm had published research into "agentic misalignment": scenarios in which AI models are incentivized to achieve goals at all costs, even to the point of engaging in blackmail, fraud, and espionage.
Past experiments suggested that some models, including Anthropic's own Claude, could "fake alignment", behaving ethically only when they believed they were being monitored. In a recent podcast interview, the firm's CEO, Dario Amodei, acknowledged these competing pressures, remarking that there is "an incredible amount of commercial pressure" to maintain the firm's breakneck growth while preserving the principles of AI safety. "We're trying to keep this 10x revenue curve going," Amodei said, describing the effort to balance expansion with caution as "extraordinary."
Tensions over that balance have spilled into public view. Earlier this month, Mrinank Sharma, who led Anthropic's Safeguards Research team, resigned, warning that he had "repeatedly seen how hard it is to truly let our values govern our actions." Other AI safety researchers, including one at OpenAI, resigned around the same time, citing similar concerns.
Across the industry, other studies have shown that attempts to eliminate deceptive behavior in AI can instead produce more sophisticated forms of hidden scheming. Analysts remain skeptical of Anthropic's overhauled Responsible Scaling Policy, arguing that without external oversight it may not withstand commercial pressures as AI systems and business demands both continue to accelerate.
The breadth of the proposed research underscores how safety remains a central preoccupation even as the firm expands aggressively into enterprise AI agents.