
OpenAI Blocks Suspected Russian Propaganda Network Exploiting AI


OpenAI said it has shut down a cluster of ChatGPT accounts allegedly tied to a Russian influence operation, cutting off access to its AI tools.


The action was outlined in a company report describing how its models were used to support coordinated online activity.

content farm tactics

In its Feb. 26 report, “Disrupting malicious uses of our models,” OpenAI detailed a case study dubbed “Fish Food” involving accounts connected to the Russian project known as “Rybar.”

According to the company, the network used ChatGPT to produce social media posts in Russian, English and Spanish. Some of that content was later published through accounts carrying the “Rybar” brand.

OpenAI said the pattern resembled a content farm built for broad distribution. In one example cited, a single prompt generated seven draft posts, six of which were subsequently published across different X accounts.


africa-focused planning

The report said the primary account also relied on ChatGPT to draft proposals and outline influence services.

These included translating into English a list of services the network could offer, such as managing X and Telegram accounts and running a bilingual investigative website focused on Africa.

OpenAI said planning documents referenced paid placements in French-language media, building amplifier networks and election-related strategies in countries including the Democratic Republic of Congo, Burundi, Cameroon and Madagascar. One proposed concept carried an estimated annual budget of up to $600,000.

access revoked

OpenAI said it banned the accounts linked to the “Rybar” network and shared indicators tied to the activity with other companies and relevant stakeholders to help curb further spread of the material.

The disclosure underscores broader concerns about the misuse of generative AI in coordinated influence operations, including synthetic media and impersonation tactics.


Sources: OpenAI, “Disrupting malicious uses of our models,” Feb. 26 report
