AI Doctors Learn Fake Diseases from Made-Up Research

University of Gothenburg, Sweden · Tue Apr 14 2026
Back in 2024, a Swedish team wanted to test whether AI chatbots could distinguish real science from nonsense. They invented "bixonimania," a fake eye disease, and uploaded two entirely fabricated research papers to a public database. The papers carried obvious red flags, including a fictional author and references to Starfleet Academy, yet within weeks major AI systems began treating bixonimania as a real diagnosis. Chatbots such as Microsoft's Copilot and Google's Gemini were soon giving confident answers about the imaginary condition; some even advised users to visit an ophthalmologist, turning a joke into medical advice. The experiment showed that AI doesn't just regurgitate facts: it spreads misinformation with alarming ease.
Worse yet, the fake research was cited in an actual peer-reviewed journal before anyone noticed. A team of Indian researchers included the bixonimania papers in their own study, which was later retracted. AI-generated nonsense, in other words, isn't just tricking machines; it is fooling real scientists too. The bigger picture is even more troubling. Millions of people consult AI for health advice every day, yet studies have found chatbots giving wrong diagnoses, recommending unnecessary tests, and even inventing body parts. Because real doctors are expensive and hard to access, patients keep turning to AI for answers, with potentially dangerous results. What started as a clever experiment exposed a serious flaw: AI relies on the data it is fed, and flawed data spreads quickly. False information cycles back into real-world "science," creating a feedback loop of bad advice. The joke stops being funny when lives are on the line.
https://localnews.ai/article/ai-doctors-learn-fake-diseases-from-made-up-research-192d0f7
