Can LLMs Score Medical Diagnoses and Clinical Reasoning as well as Expert Panels?

arXiv cs.LG / 4/17/2026


Key Points

  • The study tests an “LLM jury” of three frontier AI models to score 3,333 medical diagnoses from real hospital cases, aiming to replace costly, slow expert clinician panels as an adjudicator.
  • The LLM jury evaluates not only the final diagnosis but also differential diagnosis, clinical reasoning, and negative treatment risk, with results benchmarked against expert panels and a separate human re-scoring panel.
  • Findings show that the uncalibrated LLM jury scores systematically lower than the clinician panels, but it preserves ordinal agreement, closely matches expert panel rankings, and has a lower probability of severe safety errors than the human re-scoring panel.
  • The research also finds that combining LLM-jury scoring with AI-generated diagnoses can flag high-risk ward diagnoses for targeted expert review, improving panel efficiency.
  • Post-hoc calibration using isotonic regression significantly improves alignment between the calibrated LLM jury and human expert panel evaluations, and the models show no self-preference bias toward diagnoses generated by their own underlying model or same-vendor models.
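The “systematically lower scores but preserved ordinal agreement” finding can be made concrete with a rank-correlation check. The minimal Kendall tau-a sketch below uses invented toy scores (it is not the study's statistic or data); it shows that a jury whose scores are uniformly shifted downward still achieves perfect ordinal agreement with the experts:

```python
def kendall_tau(a, b):
    """Kendall tau-a rank correlation between paired score lists.

    Counts concordant vs. discordant pairs; ties count toward neither.
    Returns a value in [-1, 1], where 1 means identical ordering.
    """
    n = len(a)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (a[i] - a[j]) * (b[i] - b[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) // 2)

# A jury that scores every case one point lower than the experts
# still ranks the cases identically:
experts = [2, 3, 4, 5]
jury = [1, 2, 3, 4]
# kendall_tau(experts, jury) == 1.0
```

This is why an uncalibrated but ordinally consistent jury can still rank diagnoses reliably, even before its score scale is aligned with the experts'.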

Abstract

Evaluating medical AI systems using expert clinician panels is costly and slow, motivating the use of large language models (LLMs) as alternative adjudicators. Here, we evaluate an LLM jury composed of three frontier AI models scoring 3,333 diagnoses on 300 real-world middle-income country (MIC) hospital cases. Model performance was benchmarked against expert clinician panel and independent human re-scoring panel evaluations. Both LLM- and clinician-generated diagnoses are scored across four dimensions: diagnosis, differential diagnosis, clinical reasoning and negative treatment risk. For each of these, we assess scoring difference, inter-rater agreement, scoring stability, severe safety errors and the effect of post-hoc calibration. We find that: (i) the uncalibrated LLM jury scores are systematically lower than clinician panel scores; (ii) the LLM jury preserves ordinal agreement and exhibits better concordance with the primary expert panels than the human expert re-score panels do; (iii) the probability of severe errors is lower for the LLM jury models than for the human expert re-score panels; (iv) the LLM jury shows excellent agreement with the primary expert panels' rankings; and (v) LLM jury models show no self-preference bias: they did not score diagnoses generated by their own underlying model, or by models from the same vendor, more (or less) favourably than those generated by other models. We also find that the LLM jury, combined with AI model diagnoses, can identify ward diagnoses at high risk of error, enabling targeted expert review and improved panel efficiency. Finally, we demonstrate that calibrating the LLM jury with isotonic regression improves alignment with human expert panel evaluations. Together, these results provide compelling evidence that a calibrated, multi-model LLM jury can serve as a trustworthy and reliable proxy for expert clinician evaluation in medical AI benchmarking.
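As a toy illustration of the post-hoc calibration step (not the paper's implementation), isotonic regression can be fit with the pool-adjacent-violators algorithm and then applied as a monotone step function mapping raw jury scores onto the expert scale. All scores and helper names below are invented for the sketch:

```python
from bisect import bisect_right

def isotonic_fit(y):
    """Pool Adjacent Violators: least-squares non-decreasing fit to y."""
    merged = []  # blocks of [mean, size]
    for v in y:
        merged.append([float(v), 1])
        # Merge backwards while the monotonicity constraint is violated.
        while len(merged) > 1 and merged[-2][0] > merged[-1][0]:
            m2, s2 = merged.pop()
            m1, s1 = merged.pop()
            merged.append([(m1 * s1 + m2 * s2) / (s1 + s2), s1 + s2])
    return [m for m, s in merged for _ in range(s)]

def make_calibrator(llm_scores, expert_scores):
    """Build a step function mapping raw LLM-jury scores to the expert scale."""
    pairs = sorted(zip(llm_scores, expert_scores))
    xs = [x for x, _ in pairs]
    fit = isotonic_fit([y for _, y in pairs])
    def calibrate(s):
        i = bisect_right(xs, s) - 1  # largest training score <= s
        return fit[max(i, 0)]
    return calibrate

# Hypothetical paired (jury, expert) scores on a 1-5 scale:
cal = make_calibrator([1, 2, 2, 3, 4, 5], [2, 3, 2, 4, 4, 5])
```

Because isotonic regression is non-parametric but constrained to be monotone, it can remove a systematic downward shift in the jury's scores without disturbing their ordering, which is exactly the failure mode the abstract describes.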