Despite recent safeguards against the abuse of AI video tools, users have succeeded in generating realistic fake footage that is now flooding social media.
According to a recent NY Times report, OpenAI’s Sora text-to-video tool has rapidly enabled the creation of hyper-realistic fake videos depicting non-existent events, including ballot fraud, immigration arrests, protests, and urban violence, raising alarms about the spread of disinformation.
Launched in late September 2025, the AI video generation tool needs only text prompts or user-uploaded images to produce convincing footage featuring fictional characters, brand logos, and even the voices of deceased celebrities, making deception simpler and more persuasive than ever.
In the NY Times’ tests, Sora refused to generate videos depicting famous people without their consent, graphic violence, or certain political figures such as President Donald Trump, yet testers still coaxed it into producing a rally clip featuring former President Barack Obama’s voice, as well as content involving children and long-dead icons like Martin Luther King Jr. and Michael Jackson.
NewsGuard’s analysis found that Sora 2 generated realistic videos advancing provably false claims in 80% of the 20 cases it tested, including five claims originating from Russian disinformation operations, with no technical expertise required, in violation of OpenAI’s own policies against misleading impersonation and fraud.
Experts warn that this ease of creation erodes trust in video evidence, long regarded as reliable, and amplifies propaganda, conspiracy theories, wrongful accusations, and conflict misinformation tailored by recommendation algorithms. Sora videos, marked with easily removable watermarks, now flood almost every major social media platform, normalizing deepfakes as entertainment and invoking the “liar’s dividend,” whereby genuine footage can be dismissed as fake.
OpenAI has acknowledged the risks of misuse, deception, and likeness abuse attached to its AI tool and has deployed iterative safeguards, such as liveness checks for “cameos,” but researchers have since bypassed these measures using public footage of CEOs and entertainers.
Disinformation monitors have noted a surge in misleading content since the launch, including fake Ukraine-Russia war clips portraying soldiers as reluctant to fight, heightening societal risks as AI-generated video redefines what counts as truth online.