Elon Musk’s X will stop users from earning money if they repeatedly post AI-generated war footage without clearly labelling it.
The move follows a wave of fabricated battle scenes linked to the conflict in Iran.
In recent days, social media feeds have been saturated with dramatic but false clips.
Revenue sharing at risk

Under the new rules, users who upload AI-generated videos of armed conflict without disclosure will lose access to creator revenue sharing for 90 days.
A second breach will result in a permanent suspension from the programme.
X announced the policy on Tuesday night, after what it described as a torrent of bogus footage at the start of the Iran conflict.
Financial incentives fuel virality

X has about half a billion monthly active users, giving viral posts huge reach.
Creators with followings approaching 100,000 people can earn hundreds of dollars a month through the platform’s advertising model.
That structure can reward attention-grabbing content, even when it is misleading.
A fake dogfight seen by millions

One widely shared video appeared to show Iranian rockets pursuing and shooting down a US jet.
BBC Verify found the clip had been viewed 70 million times.
The footage was not real, but its reach underlined how quickly such material can spread.
Exaggerated explosions distort reality

Another circulating clip altered genuine footage of a missile strike.
AI was used to replace smoke at the scene with a fireball several times larger than the real blast.
The manipulation turned a real event into something far more dramatic and deceptive.
Misinformation spreads across platforms

The problem is not limited to X.
Instagram and Facebook, which are run by Meta, have also carried fake battle scenes from the conflict.
The same misleading clips often bounce from one platform to another within hours.
Old footage recycled as new attack

A viral Instagram post claimed to show “Iran destroyed the US airbase in Riyadh”.
In reality, it was 18-month-old footage of the aftermath of an Israeli strike on an oil refinery in Hodeidah, Yemen.
The recycled video was presented as fresh evidence from the current conflict.
X: authentic information is critical

Nikita Bier, X’s head of product, said the company had to act.
“During times of war, it is critical that people have access to authentic information on the ground. With today’s AI technologies, it is trivial to create content that can mislead people,” he said.
His comments framed the policy as a safeguard during a volatile period.
Clear warning to creators

Bier outlined the consequences in detail.
“Starting now, users who post AI-generated videos of an armed conflict, without adding a disclosure that it was made with AI, will be suspended from creator revenue sharing for 90 days. Subsequent violations will result in a permanent suspension from the programme.”
The emphasis is on repeated and undisclosed use of AI-generated war content.
Factcheckers see AI ‘turbocharge’ falsehoods

Full Fact, the UK factchecking organisation, said it was “increasingly seeing AI turbocharge the spread of misinformation on social media”.
The group has observed a sharp rise in fabricated images and clips linked to the Iran conflict.
Researchers warn that generative tools are lowering the barrier to producing convincing fakes.
Fake landmarks and leaders go viral

Steve Nowottny, Full Fact’s editor, described some of the imagery circulating online.
“In the last few days we’ve seen lots of examples of AI images shared across different social media platforms as if they are real, including fake pictures of an aircraft carrier and the Burj Khalifa on fire, and an image supposedly showing the body of Ayatollah Khamenei.
“Even when AI images seem low quality, or still have a visible watermark on them, we often see them shared at scale, and the sheer volume of this fake content and the ease with which it is generated and spreads is a real concern.”
Chatbots drafted in as factcheckers

Sam Stockwell, who researches AI in online information at the UK’s Centre for Emerging Technology and Security, said some users are turning to chatbots for verification.
“Unfortunately chatbots are not very good at assessing real-time events,” he said.
Screenshots of inaccurate chatbot responses are then shared as supposed proof that footage is genuine.
Manipulating AI to fit the narrative

Stockwell warned that AI tools themselves can be exploited.
“That does not, however, stop people posting the chatbot’s incorrect assessments as evidence something is real,” he said.
“People are trying to manipulate AI outputs to support their narrative and arguments about the war.”
Meta has been approached for comment.