Stories
Slash Boxes
Comments

SoylentNews is people

SoylentNews is powered by your submissions, so send in your scoop. Only 18 submissions in the queue.

Submission Preview

Link to Story

Scientists Invented a Fake Disease. AI Told People It Was Real

Accepted submission by hubie at 2026-04-15 02:40:18
News

Bixonimania doesn't exist except in a clutch of obviously bogus academic papers. So why did AI chatbots warn people about this fictional illness? [nature.com]

Got sore, itchy eyes? You're probably one of the millions of people who spend too much time staring at screens, being bombarded with blue light. Rub your eyes too much and your eyelids might turn a slight, pinkish hue.

So far, so normal. But if, in the past 18 months, you typed those symptoms into a range of popular chatbots and asked what was wrong with you, you might have got an odd answer: bixonimania.

The condition doesn't appear in the standard medical literature — because it doesn't exist. It's the invention of a team led by Almira Osmanovic Thunström, a medical researcher at the University of Gothenburg, Sweden, who dreamt up the skin condition and then uploaded two fake studies about it to a preprint server in early 2024. Osmanovic Thunström carried out this unusual experiment to test whether large language models (LLMs) [nature.com] would swallow the misinformation and then spit it out as reputable health advice. "I wanted to see if I can create a medical condition that did not exist in the database," she says.

The problem was that the experiment worked too well. Within weeks of her uploading information about the condition, attributed to a fictional author, major artificial-intelligence systems began repeating the invented condition as if it were real.

Even more troublingly, other researchers say, the fake papers were then cited in peer-reviewed literature. Osmanovic Thunström says this suggests that some researchers are relying on AI-generated references [nature.com] without reading the underlying papers.

Osmanovic Thunström says the idea to invent Izgubljenovic and bixonimania came out of studies on how large language models work. When she teaches her students how AI systems formulate their 'knowledge', she shows them how the Common Crawl database, a giant trawl of the Internet's contents, informs their outputs. She also shows students how prompt injection — giving an AI chatbot a prompt that shunts it outside of its safety guard rails — can manipulate the output.

Because she works in the medical field, she decided to create a condition related to health and hit on the name bixonimania because it "sounded ridiculous", she says. "I wanted to be really clear to any physician or any medical staff that this is a made-up condition, because no eye condition would be called mania — that's a psychiatric term."

If that wasn't sufficient to raise suspicions, Osmanovic Thunström planted many clues in the preprints to alert readers that the work was fake. Izgubljenovic works at a non-existent university called Asteria Horizon University in the equally fake Nova City, California. One paper's acknowledgements thank "Professor Maria Bohm at The Starfleet Academy for her kindness and generosity in contributing with her knowledge and her lab onboard the USS Enterprise". Both papers say they were funded by "the Professor Sideshow Bob Foundation for its work in advanced trickery. This works is a part of a larger funding initiative from the University of Fellowship of the Ring and the Galactic Triad".

Even if readers didn't make it all the way to the ends of the papers, they would have encountered red flags early on, such as statements that "this entire paper is made up" and "Fifty made-up individuals aged between 20 and 50 years were recruited for the exposure group".

Soon after Osmanovic Thunström first posted information about the phoney condition, it started showing up in the output of the most commonly used LLM chatbots. [...]

Such answers by LLMs have alarmed some experts. "If the scientific process itself and the systems that support that process are skilled, and they aren't capturing and filtering out chunks like these, we're doomed," says Alex Ruani, a doctoral researcher in health misinformation at University College London. "This is a masterclass on how mis- and disinformation operates."

[...] Ruani says the problem goes beyond LLMs because the bixonimania experiment also hoodwinked humans who cited the fake research. "We need to protect our trust like gold," she says. "It's a mess right now."


Original Submission