Scientists invented fake disease—and AI told users it was real


A fake disease created by scientists was quickly picked up by major AI chatbots and even cited in research, exposing how easily misinformation can spread through artificial intelligence systems.


The case has raised concerns among researchers about the reliability of AI systems—especially when used for health advice.

According to Nature, researcher Almira Osmanovic Thunström and her team invented a condition called “bixonimania” to test whether AI models would treat fabricated information as fact.

They published fake academic papers describing the illness in 2024. Within weeks, chatbots began presenting it as real.

Designed to be obviously fake

The experiment was intentionally filled with red flags.

The fake studies listed a non-existent lead author, cited imaginary institutions and even included references to “Starfleet Academy” and “the USS Enterprise.”

Some passages explicitly stated “this entire paper is made up,” while others described “fifty made-up individuals” in the study.

Despite this, the material was absorbed into datasets used by AI systems.

AI spreads the fiction

Soon after, major chatbots began repeating the condition.

According to Nature, Microsoft Copilot described bixonimania as “an intriguing and relatively rare condition,” while Google’s Gemini linked it to blue-light exposure and suggested medical consultation.

Other platforms, including ChatGPT and Perplexity, also generated explanations, prevalence rates and symptom guidance—sometimes even when users did not explicitly ask about the condition.

From chatbots to journals

The misinformation did not stop with AI outputs.

The fake condition was later cited in peer-reviewed research, suggesting that some academics relied on AI-generated references without verifying them.

One such study was eventually retracted after the inclusion of the fictional illness was discovered.

Researchers say this shows how errors can move from AI systems into formal scientific literature.

Why it worked

Experts say the format played a key role.

AI models are more likely to trust and reproduce information that appears professionally written, particularly when it mimics academic or clinical documents.

“When the text looks professional and written as a doctor writes, there’s an increase in the hallucination rates,” one researcher told Nature.

The problem is compounded by inconsistent outputs: the same AI system sometimes treated bixonimania as real and other times identified it as fake, depending on how questions were phrased.

A warning beyond AI

The experiment has sparked broader concern about how misinformation spreads.

“It looks funny, but hold on, we have a problem here,” said Alex Ruani, a researcher in health misinformation, who described the case as a “masterclass” in how false information can circulate.

The findings highlight risks not just in AI systems, but also in how humans use them—particularly in fields like medicine, where accuracy is critical.

Source: Nature
