Synthetic clips distorting battlefield events have amassed hundreds of millions of views on social media platforms.
A surge of AI-generated war disinformation footage is distorting public understanding of the conflict between Israel, the US and Iran, as synthetic clips and images rack up hundreds of millions of views across major platforms.
Fabricated videos purporting to show missile strikes on cities such as Tel Aviv and Dubai have circulated widely, often framed as real-time battlefield updates. One viral clip of rockets raining down on Tel Aviv has been shared in hundreds of posts, while another widely viewed video falsely depicted Dubai’s Burj Khalifa ablaze amid supposed drone and missile attacks.
The wave of disinformation now includes AI-generated satellite imagery, a relatively new tactic in information warfare. Iran’s state-linked Tehran Times shared an image claiming to show a ruined US radar installation in Qatar, but analysis by BBC journalists found that it had been created or altered with Google’s AI tools.
Other posts have recycled video game footage as real combat scenes, underscoring how easily synthetic or repurposed visuals can pass as frontline reporting. Fact-checkers say both pro-Iranian and pro-Israeli networks are exploiting these tools to exaggerate battlefield successes, while opportunistic creators chase ad revenue and follower growth by feeding demand for dramatic war content.
BBC Verify estimates that the most popular AI fakes have already attracted more than 100m views, making this conflict one of the most intensely saturated by synthetic media to date. The crisis has exposed vulnerabilities in automated content-moderation systems. Users on X have repeatedly turned to the platform’s chatbot to verify questionable posts, only to receive confident — and incorrect — assurances that AI-created videos were genuine, sometimes backed by spurious references to mainstream news outlets.
Under mounting scrutiny, X has moved to curb financial incentives for deceptive war content. On 3 March 2026, its head of product announced that creators who share AI-generated videos of armed conflict without clearly labeling them will be suspended from X’s revenue-sharing program for 90 days, with repeat violators facing a permanent ban.
The platform will rely on AI-detection signals, metadata and other resources to flag undisclosed synthetic footage, although critics warn the policy covers only a narrow slice of AI-powered political and wartime manipulation.