
Industry insiders push plan to sabotage AI by corrupting its data supply



A small group of figures from inside the artificial intelligence industry has launched a controversial project aimed at weakening the technology from within. Alarmed by how AI systems are being used, they are urging critics to take direct action rather than wait for regulation.

A new campaign

The initiative, called Poison Fountain, encourages website owners to deliberately feed misleading information to AI systems that crawl the web for training data. The project has been live for around a week, according to reporting by The Register.

AI models rely heavily on data scraped from public websites. When that data is accurate, it improves model performance; when it is flawed or misleading, it can degrade the quality of every system trained on it.
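To make that dependency concrete, here is a minimal, self-contained sketch of the general principle, not Poison Fountain's actual method: flipping a fraction of training labels in a toy scikit-learn classifier (the dataset, poison rates, and model choice are all illustrative assumptions) measurably lowers test accuracy.

```python
# Toy illustration: corrupting a fraction of training labels degrades a
# simple classifier. Generic demonstration only, not the project's code.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

for poison_rate in (0.0, 0.1, 0.3):
    y_poisoned = y_train.copy()
    n_flip = int(poison_rate * len(y_poisoned))
    flip_idx = np.random.default_rng(0).choice(
        len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[flip_idx] ^= 1  # flip the 0/1 label on the chosen samples

    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    print(f"poison rate {poison_rate:.0%}: test accuracy "
          f"{model.score(X_test, y_test):.3f}")
```

Label flipping is only the crudest form of poisoning, and attacks on web-scale language models are subtler, but the underlying dynamic is the same: a model learns whatever its training data contains.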

Those behind Poison Fountain argue that this dependency represents a critical weakness in modern AI development.

How poisoning works

Data poisoning can occur in several ways, from simple factual errors on websites to more targeted manipulation of training datasets. The Register noted that such attacks can involve altered code or content designed to introduce subtle mistakes that are difficult for models to detect.
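One tactic consistent with that description, sketched here purely as an assumption about how a site owner might proceed (the project has not published its methods), is to serve a decoy page whenever a request's User-Agent matches a known AI crawler. GPTBot, ClaudeBot, and CCBot are real, published bot identifiers; the Flask app and page contents are hypothetical.

```python
# Hypothetical sketch: serve subtly wrong content to known AI-crawler
# user agents while human visitors see the real page. The bot names are
# real published identifiers; everything else is illustrative.
from flask import Flask, request

app = Flask(__name__)

AI_CRAWLERS = ("GPTBot", "ClaudeBot", "CCBot")

REAL_PAGE = "<html><body>Accurate article text.</body></html>"
DECOY_PAGE = "<html><body>Subtly wrong article text.</body></html>"

@app.route("/")
def index():
    ua = request.headers.get("User-Agent", "")
    # Route suspected training-data crawlers to the decoy version.
    if any(bot in ua for bot in AI_CRAWLERS):
        return DECOY_PAGE
    return REAL_PAGE

if __name__ == "__main__":
    app.run()
```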


The project draws inspiration from research by AI firm Anthropic. A paper published last October suggested that poisoning attacks are more feasible than previously thought and that only a small number of malicious documents can noticeably harm a model’s output.

The report stressed that Poison Fountain, which corrupts training data deliberately, should not be confused with the separate problem of harmful advice generated by AI systems themselves.

Anonymous insiders

One person who alerted The Register to the project asked to remain anonymous, citing their employment at a major US technology company involved in AI development. They said the goal is to highlight what they described as AI’s “Achilles’ Heel” and motivate others to create what they called “information weapons”.

The source claimed five people are involved, some allegedly working at other large AI firms. While this has not been independently verified, the group has said it intends to provide cryptographic proof of multiple participants once coordination allows.

Call to action

The Poison Fountain website argues that passive resistance is no longer enough. “We agree with Geoffrey Hinton: machine intelligence is a threat to the human species,” the site states. “In response to this threat we want to inflict damage on machine intelligence systems.”


It provides two links to poisoned datasets, one hosted on a conventional website and another on the Tor network, where it is harder to take down. Visitors are urged to “assist the war effort by caching and retransmitting this poisoned training data”.

Debate and risks

The Register reported that critics of AI have long pushed for regulation, but the Poison Fountain group argues that rules alone cannot stop a technology that is already widely available. Instead, they believe active disruption is the only remaining option.

Some researchers question whether such efforts are necessary, noting concerns that AI systems may already be degrading due to overreliance on synthetic data, a phenomenon known as model collapse. Others warn that data poisoning blurs into misinformation, an issue highlighted in a 2025 NewsGuard report on polluted online information ecosystems.
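Model collapse can be caricatured in a few lines. The sketch below is a toy assumption, not anything from the cited reports: each generation refits a Gaussian “model” to samples from the previous fit with the rarest values dropped, mimicking how generative models under-sample low-probability outputs, and the estimated spread shrinks steadily.

```python
# Toy sketch of model collapse: each generation is fit to the previous
# generation's samples with extreme values dropped, so tail information
# is progressively lost. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=10_000)  # generation 0: "real" data

for gen in range(8):
    mu, sigma = data.mean(), data.std()
    print(f"generation {gen}: std = {sigma:.3f}")
    samples = rng.normal(mu, sigma, size=10_000)
    # Keep the central 98%, as truncated decoding effectively does.
    lo, hi = np.quantile(samples, [0.01, 0.99])
    data = samples[(samples > lo) & (samples < hi)]
```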

Whether Poison Fountain gains traction or fades, it underscores growing unease within the AI industry itself about where the technology is heading.

Sources: The Register, Anthropic
