CheXthought: A global multimodal dataset of clinical chain-of-thought reasoning and visual attention for chest X-ray interpretation

arXiv cs.AI / 4/30/2026


Key Points

  • The paper introduces CheXthought, a global multimodal clinical dataset containing 103,592 chain-of-thought reasoning traces and 6.6M synchronized visual attention annotations across 50,312 multi-read chest X-rays, contributed by 501 radiologists in 71 countries.
  • The authors report that CheXthought reasoning traces outperform chain-of-thought produced by state-of-the-art vision–language models in factual accuracy and spatial grounding for chest X-ray interpretation.
  • They show that incorporating visual attention as an inference-time hint helps recover missed findings and reduces hallucinations (a rough sketch of this idea follows this list).
  • Training with CheXthought is claimed to improve pathology classification, visual faithfulness, temporal reasoning, and uncertainty communication, including the ability to predict human–human and human–AI disagreement from images.
  • Overall, the dataset is positioned as a resource for developing more transparent and interpretable multimodal vision–language systems for clinical reasoning.
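
To make the attention-hint idea concrete, here is a minimal sketch that assumes the visual attention annotations can be rendered as a per-pixel heatmap in [0, 1]. It blends that heatmap onto the X-ray with Pillow and pairs the result with a re-examination prompt; the function name, overlay style, and prompt wording are illustrative assumptions, not the paper's actual hinting pipeline.

```python
import numpy as np
from PIL import Image

def overlay_attention_hint(xray_path: str, attention: np.ndarray, alpha: float = 0.4) -> Image.Image:
    """Blend an (H, W) attention map in [0, 1] onto a chest X-ray as a red overlay.

    Hypothetical helper: CheXthought's actual attention format is not specified here.
    """
    xray = Image.open(xray_path).convert("RGB")
    # Rescale the attention map to 0-255 and resize it to the image dimensions.
    heat = Image.fromarray((np.clip(attention, 0, 1) * 255).astype(np.uint8), mode="L")
    heat = heat.resize(xray.size, resample=Image.BILINEAR)
    # Build a red heat layer (attention drives the red channel only) and blend it in.
    zero = Image.new("L", xray.size, 0)
    red = Image.merge("RGB", (heat, zero, zero))
    return Image.blend(xray, red, alpha)

# The hinted image can then be sent to a vision-language model together with a
# prompt that directs the model to the highlighted regions, for example:
hint_prompt = (
    "The red overlay marks regions a radiologist fixated on. "
    "Re-examine those regions and report any findings missed in your first read."
)
```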

Abstract

Chest X-ray interpretation is one of the most frequently performed diagnostic tasks in medicine and a primary target for AI development, yet current vision–language models are primarily trained on datasets of paired images and reports, not the cognitive processes and visual attention that underlie clinical reasoning. Here, we present CheXthought, a global, multimodal resource containing 103,592 chain-of-thought reasoning traces and 6,609,082 synchronized visual attention annotations across 50,312 multi-read chest X-rays from 501 radiologists in 71 countries. Our analysis reveals clinical reasoning patterns in how experts deploy distinct visual search strategies, integrate clinical context, and communicate uncertainty. We demonstrate the clinical utility of CheXthought across four dimensions. First, CheXthought reasoning significantly outperforms state-of-the-art vision–language model chain-of-thought in factual accuracy and spatial grounding. Second, visual attention data used as an inference-time hint recovers missed findings and significantly reduces hallucinations. Third, models trained on CheXthought data achieve significantly stronger pathology classification, visual faithfulness, temporal reasoning, and uncertainty communication. Fourth, leveraging CheXthought's multi-reader annotations, we predict both human–human and human–AI disagreement directly from an image, enabling transparent communication of case difficulty, uncertainty, and model reliability. These findings establish CheXthought as a resource for advancing multimodal clinical reasoning and the development of more transparent, interpretable vision–language models.
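
The fourth use case, predicting reader disagreement directly from an image, is in essence an image-to-score regression problem. As a minimal sketch only, the snippet below pairs an off-the-shelf ResNet-18 backbone with a two-output head for human–human and human–AI disagreement scores; the backbone, feature dimension, and output scaling are assumptions rather than the architecture described in the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

class DisagreementPredictor(nn.Module):
    """Assumed architecture: ResNet-18 features feeding a small regression head."""

    def __init__(self) -> None:
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()          # expose the 512-dim pooled features
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(512, 64),
            nn.ReLU(),
            nn.Linear(64, 2),                # [human-human, human-AI] disagreement
            nn.Sigmoid(),                    # squash scores into [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(x))

# A dummy forward pass on a 224x224 RGB-encoded X-ray yields the two scores.
model = DisagreementPredictor()
scores = model(torch.randn(1, 3, 224, 224))  # shape: (1, 2)
```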