Deepfake X‑Rays: Even Experts Can’t Tell the Difference

New York City, USA, Thursday, March 26, 2026
In a recent experiment, medical imaging specialists were tested on their ability to spot fake X‑ray images created by artificial intelligence. The study used 264 images, split evenly between real scans and AI‑generated ones. Participants came from twelve hospitals in six countries and ranged from fresh graduates to veterans with forty years of experience.

The test had two parts. First, radiologists reviewed a mixed set that included scans of different body regions alongside AI‑generated images produced with a chatbot tool. Second, they examined chest X‑rays, half authentic and half created by a generative AI model developed at Stanford. In both cases, the doctors were initially told nothing about the presence of fakes.

When no warning was given, only 41% of the doctors correctly identified the synthetic images. Once they were told that some images were fake, their accuracy rose to 75%. Yet the results varied widely: individual specialists spotted between 58% and 92% of the fake scans. Experience did not help; years on the job were unrelated to detection skill, though specialists focused on bone imaging did better than others.
The same pattern appeared in the AI systems asked to spot the fakes. Four large language models (GPT‑4o, GPT‑5, Gemini 2.5 Pro and Llama 4 Maverick) managed between 57% and 85% accuracy on the chatbot‑generated images. On the chest X‑rays, doctors scored 62% to 78%, while the AI models ranged from 52% to 89%. Even the model that produced the images could not identify all of them.

Researchers noted that synthetic X‑rays often look “too perfect.” Smooth bones, straight spines, symmetrical lungs and oddly clean fractures are common tell‑tale signs, suggesting that AI models still fail to reproduce the subtle irregularities of real human anatomy.

The findings raise alarms about potential misuse. A forged fracture could be used to manipulate legal claims, and a hacker who injects fake scans into hospital records could derail patient care. To guard against such threats, experts recommend embedding invisible watermarks or cryptographic signatures into images at the time of capture. This would let clinicians verify that a picture truly came from their own equipment.

Looking ahead, the researchers warn that the problem will only grow as AI learns to generate 3D modalities such as CT and MRI scans. They have released a public dataset of deepfake X‑rays, along with quizzes to train people in spotting fakes. Building awareness now could help keep medical records trustworthy.
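To make the signing recommendation concrete: one minimal way to authenticate a scan at capture time is a keyed hash (HMAC) computed by the imaging device and checked later by the clinic. This is only an illustrative sketch, not the researchers' proposal; the key name, functions and sample data below are all hypothetical, and a real deployment would use per-device keys in secure hardware and a standard such as DICOM digital signatures.

```python
import hmac
import hashlib

# Hypothetical shared secret, provisioned to the X-ray machine at install
# time and known to the hospital's verification service.
DEVICE_KEY = b"per-device-secret-provisioned-at-install"

def sign_scan(image_bytes: bytes, key: bytes = DEVICE_KEY) -> str:
    """Compute an HMAC-SHA256 tag over the raw pixel data at capture time."""
    return hmac.new(key, image_bytes, hashlib.sha256).hexdigest()

def verify_scan(image_bytes: bytes, tag: str, key: bytes = DEVICE_KEY) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign_scan(image_bytes, key), tag)

# Stand-in bytes for a raw X-ray image.
scan = b"\x00\x01\x02raw-pixel-data"
tag = sign_scan(scan)

print(verify_scan(scan, tag))            # untouched scan verifies: True
print(verify_scan(scan + b"\xff", tag))  # any tampering breaks the tag: False
```

An injected deepfake would carry no valid tag, so it would fail verification even if it looked anatomically plausible; the check depends on the key, not on visual inspection.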
https://localnews.ai/article/deepfake-xrays-even-experts-cant-tell-the-difference-3a67580f
