How AI became a weapon in Ukraine’s information war

A young Ukrainian soldier appears on screen, sobbing as he describes being forced into combat. The video spread rapidly, drawing outrage and sympathy across social media platforms.

According to EFE, cited by Agerpres and reported by Digi24.ro, the man was not a soldier at all. The footage was generated using artificial intelligence, reportedly built on the face of a Russian content creator who has spoken out against President Vladimir Putin’s regime.

The episode highlights how the war in Ukraine, now entering its fifth year, is increasingly shaped by digital manipulation as much as by events on the ground.

Synthetic Reality

EFE reported that advances in generative AI have made fabricated videos far more convincing than the crude deepfakes seen in the early months of the invasion. Minor inconsistencies in uniforms or insignia may be the only visible clues.

Olga Petriv, an artificial intelligence law specialist at Ukraine’s Center for Democracy and the Rule of Law, warned that modern tools can produce material so realistic that detecting fraud often requires technical analysis rather than casual viewing.

She cautioned that the spread of artificial imagery risks fostering what she called a “presumption of falsity”, where authentic wartime evidence can be dismissed as fabricated.

“They can now deny even undeniable evidence of their crimes (for example, authentic images from the front lines) simply by labeling them as ‘AI-generated’.”

Manipulating Machines

Visual deception is only part of the strategy. EFE also reported that disinformation actors are attempting to influence large language models, including chatbots such as ChatGPT, through a method known as “LLM grooming”.

The tactic relies on overwhelming the internet with coordinated false content so that AI systems absorb and later reproduce misleading narratives in their answers. Petriv explained, in remarks cited by EFE, that this represents a shift from individually crafted fake stories to highly automated, large-scale operations.

Investigations referenced by EFE describe the so-called Pravda network as an example of this ecosystem. The network, active in around 50 countries, publishes vast quantities of material aligned with pro-Russian narratives. By generating volume and attracting links from blogs and other websites, such platforms increase the likelihood that search engines and AI models treat their content as credible.

The multilingual nature of some of these materials, appearing in major European languages, further indicates coordinated dissemination designed to reach audiences beyond Ukraine and Russia.

Beyond AI-focused tactics, EFE noted that recent disinformation campaigns have also sought to discourage foreign fighters, particularly Colombians, from supporting Ukraine and to complicate peace efforts.

As digital tools grow more sophisticated, the information space surrounding the conflict is becoming another contested front, where perception itself is strategically targeted.

Sources: EFE, Agerpres, Digi24.ro
