AI-generated “slopaganda” is flooding social media as the US and Iran battle for influence online, raising concerns about misinformation, emotional manipulation and declining public trust.
Waves of bizarre, low-quality videos are filling social media feeds during the latest geopolitical tensions.
They look chaotic, sometimes ridiculous—but behind them is a deliberate strategy.
A new propaganda front
In early March, shortly after the first US-Israeli strikes on Iran, the conflict extended far beyond the physical battlefield. According to Digi24, the White House released a video combining real footage of the attacks with scenes from films, video games and anime.
Iran and its supporters responded quickly, pushing out waves of content online. This included recycled war footage falsely presented as current events, alongside AI-generated clips showing fabricated strikes on Tel Aviv and US bases.
The exchange marked a clear shift: propaganda is no longer just about polished messaging. It is now fast, messy and highly scalable.
What “slopaganda” means
Researchers have coined a term for this phenomenon: “slopaganda.”
Introduced in an academic context and later discussed by experts writing in The Conversation, the term combines “slop” and “propaganda” to describe low-quality, AI-generated content designed to influence public perception.
Unlike traditional propaganda, which often aims to appear credible, slopaganda does not rely on realism. Instead, it thrives on volume, speed and emotional impact.
The content can take many forms—videos, images or text—but shares a common trait: it is easy to produce and easy to spread.
Emotion over accuracy
A key feature of slopaganda is that it is not always meant to be believed literally.
For example, AI-generated clips showing political figures in exaggerated or symbolic scenarios are not trying to convince viewers of factual events. Instead, they aim to create associations—linking individuals or countries with ideas such as corruption, danger or evil.
Researchers note that this kind of content works by targeting emotions, particularly when audiences are distracted. Scrolling through social media, users are more likely to absorb impressions than to critically evaluate what they see.
Over time, repeated exposure can reinforce these associations, even if the original content is obviously fabricated.
How it slips through
The effectiveness of slopaganda lies partly in how it bypasses the critical scrutiny viewers normally apply to persuasive content.
Because the content is often absurd or low-quality, viewers may not engage with it seriously—yet it still leaves an imprint. In other cases, misleading clips are taken out of context and shared as real, a phenomenon known as “context collapse.”
During crises, this becomes especially dangerous. When reliable information is limited, people are more likely to rely on whatever content is most visible or emotionally engaging.
Generative AI tools amplify this effect by allowing large volumes of such material to be produced rapidly and cheaply.
A wider impact on truth
Beyond individual pieces of misinformation, experts warn of a broader consequence: the erosion of trust.
As slopaganda spreads, it does not just blur the line between true and false—it makes people question whether that line exists at all.
When audiences begin to doubt everything, including credible sources, the result can be a fragmented information environment where individuals choose what to believe based on preference rather than evidence.
In highly polarized societies, this dynamic can deepen divisions and make coordinated responses to crises more difficult.
Can it be stopped?
Addressing the spread of slopaganda will likely require action on several fronts.
Researchers suggest that individuals need stronger digital literacy skills, including the ability to verify sources and recognise patterns of manipulation. Simply reacting to individual posts is not enough; understanding broader tactics is key.
At the same time, platforms and regulators may need to introduce clearer labelling systems for AI-generated content or remove material that crosses into harmful misinformation.
There is also growing pressure on technology companies to take responsibility for how their tools are used, particularly as generative AI becomes more accessible.
The challenge, however, is not just technical. As the volume of content increases, the ability to maintain a shared sense of reality may become the most difficult task of all.
Sources: Digi24, The Conversation