Medical Reasoning with Large Language Models: A Survey and MR-Bench

arXiv cs.AI / April 13, 2026


Key Points

  • The paper surveys how large language models can support medical reasoning, emphasizing that clinical decision-making requires robust reasoning beyond factual recall.
  • It frames medical reasoning as an iterative loop of abduction, deduction, and induction, and organizes existing approaches into seven technical routes (covering both training-based and training-free methods).
  • The authors conduct a unified cross-benchmark evaluation of representative medical reasoning models under consistent experimental settings, improving comparability across prior work.
  • They introduce MR-Bench, a new benchmark derived from real hospital data, to better measure clinically grounded reasoning.
  • Results on MR-Bench reveal a substantial gap between strong performance on exam-style tasks and accuracy on authentic clinical decision-making tasks.

Abstract

Large language models (LLMs) have achieved strong performance on medical exam-style tasks, motivating growing interest in their deployment in real-world clinical settings. However, clinical decision-making is inherently safety-critical, context-dependent, and conducted under evolving evidence. In such situations, reliable LLM performance depends not on factual recall alone, but on robust medical reasoning. In this work, we present a comprehensive review of medical reasoning with LLMs. Grounded in cognitive theories of clinical reasoning, we conceptualize medical reasoning as an iterative process of abduction, deduction, and induction, and organize existing methods into seven major technical routes spanning training-based and training-free approaches. We further conduct a unified cross-benchmark evaluation of representative medical reasoning models under a consistent experimental setting, enabling a more systematic and comparable assessment of the empirical impact of existing methods. To better assess clinically grounded reasoning, we introduce MR-Bench, a benchmark derived from real-world hospital data. Evaluations on MR-Bench expose a pronounced gap between exam-level performance and accuracy on authentic clinical decision tasks. Overall, this survey provides a unified view of existing medical reasoning methods, benchmarks, and evaluation practices, and highlights key gaps between current model performance and the requirements of real-world clinical reasoning.