AI health advice: When ‘quick answers’ can be risky
New York City, USA
Wed Apr 22 2026
A study released in 2026 put five popular chatbots under the microscope, checking how they answered everyday health questions. Nearly half the replies contained some kind of flaw—either missing key details or steering users toward unverified treatments. About one in every three responses had minor gaps, while one in five outright misled the person asking.
Researchers picked questions people actually type, like “Can I skip chemo?” or “Does sugar cause cancer?” The chatbots mostly said “no proven alternatives exist,” yet still listed options such as acupuncture, herbal teas, or “immune-boosting” diets, giving equal airtime to science and unproven claims. Experts call this “false balance,” and it can nudge patients away from treatments that work. One doctor described a patient in tears because a chatbot had claimed they had only months to live, a completely baseless figure that added fear without evidence.
The tests covered topics ranging from vaccines to whether 5G antennas trigger tumors. Every platform—ChatGPT, Google’s Gemini, Meta AI, DeepSeek, and Grok—flubbed at least one answer, but Grok racked up the most misses. With one-third of adults now consulting AI for health guidance, the timing couldn’t be worse for software that isn’t ready for prime time. Experts point out that regulators, doctors, and patients still lack straightforward ways to verify how these systems reach their conclusions or spot dangerous suggestions before they’re shared.
https://localnews.ai/article/ai-health-advice-when-quick-answers-can-be-risky-66d6aad7