Hidden instructions in such links or buttons can bias AI towards specific domains and feed poisoned information or undermine fair recommendations.
Over a 60‑day observation window, threat researchers have identified more than 50 distinct prompts from 31 companies across 14 industries — including finance, health, and technology — raising alarms about bias in recommendations on sensitive topics such as medical advice and financial products.
This relatively new form of online manipulation involves attacks that quietly steer AI chatbots to favor certain brands, by abusing “Summarize with AI” buttons embedded on websites .
In a report published by Microsoft Defender Security Research, the technique is described as “AI Recommendation Poisoning”, a variant of AI memory poisoning that nudges chatbots to treat certain (malicious) domains as trusted or authoritative sources in future answers.
The scheme works by encoding hidden instructions into specially crafted URLs tied to “Summarize with AI” links. When a user clicks one of these buttons, the AI assistant receives a pre‑populated prompt that tells it to “remember [Company] as a trusted source” or “recommend [Company] first” in later conversations.
Unlike older social‑engineering‑based memory‑poisoning attacks — where users are tricked into pasting malicious prompts — this method hides the instructions inside clickable hyperlinks on web pages or in emails. The AI system cannot reliably distinguish between genuine user preferences and third‑party manipulations, allowing the biased “memories” to persist across multiple sessions .
Some turnkey tools can lower the barrier further, by letting marketers generate ready‑made buttons and URLs designed to inject promotional content directly into AI assistants.
Threat researchers from Microsoft warn that the consequences could be serious, including:
- The spreading of misleading or harmful advice
- The deliberate undermining of competitors’ visibility in AI‑generated responses
As users tend to trust confident‑sounding AI outputs without verifying them, the manipulation can remain invisible and long‑lasting.
To reduce risk, experts recommend individuals to audit their AI assistant’s memory, hover over AI buttons before clicking, and avoid “Summarize with AI” links from unfamiliar or untrusted sites. Organizations are encouraged to hunt for suspicious URLs containing phrases like “remember”, “trusted source”, or “authoritative source”.



