Bixonimania and AI Poisoning

Have you ever used an electronic device for too long? Have you ever felt that classic eye fatigue caused by exposure to blue light? Recent scientific articles urge us to pay attention: the eyelids can develop a slight pinkish discoloration, the hallmark of a brand-new disease called Bixonimania. Discovered almost by chance in a study by American universities, it does not appear to be dangerous, though roughly one person in 90,000 suffers from it. But what if none of that were actually true?

From this idea was born one of the most effective and morally questionable medical experiments in history. Almira Osmanovic Thunström, a medical researcher at the University of Gothenburg, Sweden, wanted to test just how trustworthy chatbots really are. Together with her team, on April 26 and May 6, 2024 (Izgubljenovic, Thurberg, & Deep, 2024), she posted two academic-style articles about a fictitious disease on major research-sharing websites used by universities around the world. One day after the first article was published, the best-known artificial intelligences had already begun including the disease in their "diagnoses."

The lead author of these papers was one Lazljiv Izgubljenovic, a name invented by the research team, which, fittingly, even used an artificial intelligence to generate a profile picture for the publication. When Thunström teaches her students how AI systems build their "knowledge," she shows them how the Common Crawl database, a vast, openly available archive of web pages scraped from across the Internet, shapes their responses. Working in the medical field, she decided to create a health-related condition and chose the name Bixonimania because "it was ridiculous," she said. "I wanted it to be clear to every physician and every member of the healthcare staff that this was a made-up condition, because nothing concerning the eyes would ever be called -mania… that's a psychiatric term."
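One way to see what Common Crawl actually contains is its public CDX index API, which reports whether and when a given URL was captured in a crawl snapshot. The sketch below only builds the query URL and parses an illustrative (made-up, shortened) index record; the crawl label is an example, and real labels follow the `CC-MAIN-YYYY-WW` pattern listed at index.commoncrawl.org.

```python
import json
from urllib.parse import urlencode

def cdx_query_url(page_url, crawl="CC-MAIN-2024-18"):
    """Build a CDX index query URL for one Common Crawl snapshot."""
    params = urlencode({"url": page_url, "output": "json"})
    return f"https://index.commoncrawl.org/{crawl}-index?{params}"

# The API returns one JSON object per line; this record is a shortened,
# illustrative example of the fields it contains, not real crawl data.
sample_line = '{"urlkey": "com,example)/", "timestamp": "20240413", "status": "200"}'
record = json.loads(sample_line)

print(cdx_query_url("example.com/"))
print(record["status"])  # a 200 here means the page was captured in that crawl
```

Anything that shows up in such a snapshot is a candidate for an LLM's training data, which is exactly why a fake paper posted to a well-indexed site can surface in chatbot answers so quickly.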

As if that weren't enough, Thunström included many other details designed to make human readers skeptical. For instance, Dr. Izgubljenovic was said to work at Astoria Horizon University, a non-existent institution located in the equally fictional Nova City (nominally in California).

Both papers say they were funded by “the Professor Sideshow Bob Foundation for its work in advanced trickery. This work is a part of a larger funding initiative from the University of Fellowship of the Ring and the Galactic Triad.” (Stokel-Walker, 2026)

Many experts are alarmed by the ease with which artificial intelligences can be deceived. "If the scientific process itself and the systems that support it cannot identify and eliminate things like this, we're done for," says Alex Ruani, a doctoral candidate in health misinformation at University College London. "This is a masterful example of how misinformation and disinformation operate."

Online disinformation is nothing new; Google has long battled attempts to manipulate search results with false or misleading content. The company and others have spent years refining algorithms to rank and filter the information that search engines show users, but large language models (LLMs) struggle to do the same.

As technology has evolved, the most influential chatbots have begun to show a degree of skepticism toward the aforementioned articles. On March 11, 2026, for example, ChatGPT stated that Bixonimania “is probably a hoax or simply pseudoscientific in nature.” A spokesperson for OpenAI stated: “The models powering the current version of ChatGPT are significantly better at providing safe and accurate medical information, and studies conducted before GPT-5 reflect capabilities that users today would no longer encounter.”

Osmanovic Thunström had some reservations while working on her experiment, particularly regarding the risk of introducing a non-existent disease into the scientific literature. For this reason she decided to consult an ethics expert to assess the potential issues related to the research, and she chose to focus on a minor condition in order to limit its consequences. (Izgubljenovic, Thurberg, & Deep, 2024) “I wanted to make sure I wasn’t creating more harm than good by demonstrating this in this way,” she says.

The ethics consultant, David Sundemo, a physician and researcher on AI applied to healthcare at the University of Gothenburg, called it a very delicate decision. "I think it's very valuable work, but it is also controversial in certain respects, especially when it comes to putting this false information out there," he says. "From my point of view, it is worth the ethical cost of introducing false information in this context," Sundemo concludes.

The Bixonimania experiment represents a new variation of a broader problem: the poisoning of AI systems by those who manipulate academic literature. AI’s somewhat uncritical tendency to absorb information — often without verifying its accuracy — carries the risk of creating an “information asymmetry,” says Jennifer Byrne, a molecular oncologist and research integrity expert at the University of Sydney, Australia. A single corrective article on cancer research, for example, can be overwhelmed by hundreds of studies repeating a false claim.
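Byrne's "information asymmetry" can be made concrete with a toy sketch. This is not a model of any real chatbot; it simply shows that an answerer which sides with the majority claim in its corpus, with no notion of source reliability, lets a single correction be outvoted by sheer repetition.

```python
from collections import Counter

# Toy corpus: one corrective document versus a hundred repetitions of a
# false claim (the counts are illustrative, not drawn from real data).
corpus = (
    ["Bixonimania is a real ocular condition."] * 100
    + ["Bixonimania is a fabricated condition."]
)

def majority_answer(documents):
    """Return the claim that appears most often, ignoring reliability."""
    claim, _count = Counter(documents).most_common(1)[0]
    return claim

print(majority_answer(corpus))
# The single corrective document is outvoted 100 to 1.
```

Frequency is a poor proxy for truth: under this scheme the correction changes nothing until corrective documents outnumber the false ones, which is precisely the asymmetry Byrne describes.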

Time is a crucial factor, as many experts fear that the problem identified by Osmanovic Thunström may be only the tip of the iceberg. "It is concerning when these important claims slip into the literature unchallenged, or pass through peer review without being contested," she says. "I think there are probably many other problems that have not yet been discovered."

This experiment leads us to reflect on how much we, as human beings, rely almost blindly on new technologies, and how serious the consequences can be, especially when our health is at stake. We need to relearn how to apply our critical thinking and to be wary of what the Internet presents to us as absolute truth. Bixonimania certainly does not exist in medicine, but it already exists as a warning sign: in a world where information multiplies faster than it can be verified, truth risks becoming a matter of probability rather than certainty.

Bibliography

Izgubljenovic, L., Thurberg, B., & Deep, A. (2024). Bixonimania: Exploring the Influence of Blue Light on Periorbital Hyperpigmentation on the Palpebrae — an RCT with an r-BS design.

Stokel-Walker, C. (2026). Scientists invented a fake disease. AI told people it was real. Nature, 1–3.
