A Reasoning-Enabled Vision-Language Foundation Model for Chest X-ray Interpretation

arXiv cs.CV / 4/2/2026


Key Points

  • The paper introduces CheXOne, a reasoning-enabled vision-language foundation model designed for chest X-ray interpretation that produces both diagnostic predictions and explicit, clinically grounded reasoning traces linking visual evidence to findings.
  • CheXOne is trained on 14.7M instruction/reasoning samples curated from 30 public datasets covering 36 CXR interpretation tasks, using a two-stage training approach (instruction tuning followed by reinforcement learning) to improve reasoning quality; a minimal sketch of this recipe follows the list.
  • In zero-shot evaluations spanning visual question answering, report generation, visual grounding, and reasoning assessment across 17 evaluation settings, CheXOne outperforms prior medical and general-domain foundation models on public benchmarks.
  • A clinical reader study reports that CheXOne-drafted reports are comparable to or better than resident-written reports in 55% of cases, while also improving clinical indication coverage and report-writing/interpretation efficiency.
  • Follow-up radiologist analyses suggest the generated reasoning traces show high clinical factuality and provide causal support for the final predictions, supporting interpretability and the potential real-world utility of explicit reasoning.

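The two-stage recipe above is standard in outline even if the paper's specifics are not public here. The sketch below is a minimal, self-contained illustration, not CheXOne's code: stage 1 is supervised instruction tuning with next-token cross-entropy, and stage 2 is a REINFORCE-style policy-gradient update driven by a scalar reward on the generated reasoning. The toy model, token shapes, and the `reasoning_reward` function are all assumptions made for illustration.

```python
# Minimal sketch (not the authors' implementation) of a two-stage
# "instruction tuning + RL" recipe for a vision-language model.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, DIM = 1000, 64

class ToyVLM(nn.Module):
    """Stand-in for a vision-language model: image features condition a text head."""
    def __init__(self):
        super().__init__()
        self.img_proj = nn.Linear(DIM, DIM)   # placeholder for an image-encoder projection
        self.embed = nn.Embedding(VOCAB, DIM)
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, img_feat, tokens):
        # Add projected image features to every token embedding, then score vocab.
        h = self.embed(tokens) + self.img_proj(img_feat).unsqueeze(1)
        return self.head(h)                   # next-token logits per position

model = ToyVLM()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

# ---- Stage 1: instruction tuning (supervised next-token prediction) ----
img = torch.randn(2, DIM)                     # fake image features
inp = torch.randint(0, VOCAB, (2, 16))        # instruction + reasoning-trace tokens
tgt = torch.roll(inp, shifts=-1, dims=1)      # shifted next-token targets
logits = model(img, inp)
sft_loss = F.cross_entropy(logits.reshape(-1, VOCAB), tgt.reshape(-1))
opt.zero_grad(); sft_loss.backward(); opt.step()

# ---- Stage 2: RL on reasoning quality (REINFORCE with a scalar reward) ----
def reasoning_reward(sampled_tokens):
    # Hypothetical reward: in the paper this would score properties such as
    # clinical factuality of the trace; here it is a random placeholder.
    return torch.rand(sampled_tokens.size(0))

logits = model(img, inp)
dist = torch.distributions.Categorical(logits=logits)
sample = dist.sample()                        # sampled reasoning/answer tokens
logp = dist.log_prob(sample).sum(dim=1)       # sequence log-probability
reward = reasoning_reward(sample)
rl_loss = -(reward * logp).mean()             # policy-gradient objective
opt.zero_grad(); rl_loss.backward(); opt.step()
```

In practice the RL stage would use sampled generations from the instruction-tuned checkpoint and a learned or rule-based reward over the full reasoning trace; the REINFORCE update here only shows the shape of that objective.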
Abstract

Chest X-rays (CXRs) are among the most frequently performed imaging examinations worldwide, yet rising imaging volumes increase radiologist workload and the risk of diagnostic errors. Although artificial intelligence (AI) systems have shown promise for CXR interpretation, most generate only final predictions, without making explicit how visual evidence is translated into radiographic findings and diagnostic predictions. We present CheXOne, a reasoning-enabled vision-language model for CXR interpretation. CheXOne jointly generates diagnostic predictions and explicit, clinically grounded reasoning traces that connect visual evidence, radiographic findings, and these predictions. The model is trained on 14.7 million instruction and reasoning samples curated from 30 public datasets spanning 36 CXR interpretation tasks, using a two-stage framework that combines instruction tuning with reinforcement learning to improve reasoning quality. We evaluate CheXOne in zero-shot settings across visual question answering, report generation, visual grounding, and reasoning assessment, covering 17 evaluation settings. CheXOne outperforms existing medical and general-domain foundation models and achieves strong performance on independent public benchmarks. A clinical reader study demonstrates that CheXOne-drafted reports are comparable to or better than resident-written reports in 55% of cases, while effectively addressing clinical indications and enhancing both report writing and CXR interpretation efficiency. Further analyses involving radiologists reveal that the generated reasoning traces show high clinical factuality and provide causal support for the final predictions, offering a plausible explanation for the performance gains. These results suggest that explicit reasoning can improve model performance, interpretability and clinical utility in AI-assisted CXR interpretation.
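To make the notion of an explicit reasoning trace concrete, here is a purely hypothetical example of the kind of structured output such a model might emit for one study, stepping from visual evidence to a finding to the final prediction. The field names, wording, and the finding itself are illustrative assumptions, not CheXOne's actual output format.

```python
# Hypothetical example (not CheXOne's real API or output) of a reasoning trace
# that links visual evidence -> radiographic finding -> diagnostic prediction.
example_output = {
    "reasoning_trace": [
        "Visual evidence: increased opacity in the right lower lung zone.",
        "Finding: focal consolidation in the right lower lobe.",
        "Link: consolidation plus the stated indication (fever, cough) "
        "supports an infectious process.",
    ],
    "prediction": "Right lower lobe pneumonia",
}

for step in example_output["reasoning_trace"]:
    print("-", step)
print("Prediction:", example_output["prediction"])
```

Structuring the output this way is what lets radiologists audit each step, which is the property the reader study and factuality analyses above set out to measure.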