The Man Who Swapped Salt for Bromide After Asking ChatGPT
- Paul Francis
It sounds like the set-up to a surreal joke: a 60-year-old man, looking to cut down on table salt, asked an artificial intelligence for alternatives and wound up in hospital after months of dosing himself with sodium bromide. Yet that is precisely what happened in the United States, according to a medical case report that has since sparked a flurry of concern about how people use AI for health advice.

The case, published in Annals of Internal Medicine: Clinical Cases, describes how the man, worried about the health effects of sodium chloride, decided to find a replacement. Instead of speaking to his doctor or a dietitian, he turned to ChatGPT. He later told clinicians that the system suggested bromide as a substitute. He then bought it online and sprinkled it onto food for around three months.
When he eventually sought help, doctors found he was suffering from bromism, a rare form of poisoning that was more common decades ago, when bromide salts were sold as sedatives. Today, bromide compounds are not approved for human consumption in most countries.
What Are the Symptoms of Bromism?
Over time, the man developed a catalogue of troubling symptoms:
paranoia, including a belief that his neighbour was poisoning him
hallucinations
insomnia and fatigue
poor coordination and unsteady movement
skin complaints including acne and red bumps known as cherry angiomas
In blood tests, his chloride levels appeared abnormally high. In reality, the laboratory analyser was misreading bromide ions as chloride, producing a falsely elevated result. That kind of diagnostic red herring is part of why bromism was once nicknamed a "great imitator" in medicine.
How Was He Treated?
The man was admitted to hospital, where he was placed under psychiatric care due to his paranoia and hallucinations. Treatment included intravenous fluids to flush the bromide, correction of his electrolyte levels, and the use of antipsychotic medication. After three weeks, his condition improved and he was discharged.
Doctors noted that many younger clinicians had little experience with bromism, since the condition has all but disappeared from modern practice. Without his disclosure about the AI-recommended substitution, diagnosis might have been even more difficult.
Did ChatGPT Really Recommend Bromide?
The clinicians never obtained the original conversation logs, so it is impossible to prove exactly what the system said. However, when the team ran similar prompts themselves, they found that ChatGPT did sometimes list sodium bromide as a possible substitute for sodium chloride, offering only vague caveats such as "context matters" and never asking for any medical history.
This raises awkward questions about how AI language models generate answers. They are designed to predict plausible text, not to provide safe or medically sound advice.
What Are the Lessons?
The case highlights three broader concerns:
AI is not a doctor. It may generate convincing answers, but it does not understand chemistry, biology, or risk in the way a professional does.
Guardrails are limited. While OpenAI and others build safeguards into their systems, loopholes remain, especially for niche queries.
Doctors may need to ask new questions. Just as they might ask patients about herbal remedies or over-the-counter pills, clinicians may increasingly need to ask: “Have you consulted an AI about this?”
For the man at the centre of this story, the outcome was ultimately positive: after a frightening spell in hospital, he recovered. But for the wider public, the case stands as a reminder: artificial intelligence can be a helpful tool, but when it comes to your health, it is no substitute for professional medical advice.