Evaluating Multimodal LLMs for Inpatient Diagnosis: Real-World Performance, Safety, and Cost Across Ten Frontier Models

arXiv cs.LG / 4/21/2026


Key Points

  • The VALID study retrospectively evaluated 10 multimodal LLMs, prompted zero-shot, on 539 real-world inpatient cases from a tertiary public hospital in South Africa, using multimodal inputs (imaging, reports, labs, notes, vitals) and expert-adjudicated ground truth.
  • Diagnostic and safety performance was tightly clustered across models (under 15% variation), so lower-cost systems performed on par with the top models despite large cost differences.
  • Compared with routine ward diagnoses, all evaluated LLMs achieved significantly better average diagnostic accuracy and patient safety scores, with performance verified through more than 10,000 jury-scored evaluations.
  • Adding radiology reports to the inputs improved performance by about 6%, and diagnostic quality and reasoning scores were strongly correlated (ρ = 0.85; see the sketch after this list).
  • Output availability varied by model (about 65–100%) due to input constraints, and results were reported as robust across evaluation subsets and design choices.
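
A correlation that strong is straightforward to check given per-case scores. A minimal sketch using SciPy's Spearman rank correlation, assuming the jury scores are available as two parallel lists; the variable names and toy values below are illustrative, not data from the paper:

```python
# Minimal sketch: Spearman rank correlation between per-case diagnostic
# and reasoning scores. Toy values only, not the study's data.
from scipy.stats import spearmanr

diagnostic_scores = [4.5, 3.0, 5.0, 2.5, 4.0, 3.5]  # hypothetical jury scores
reasoning_scores = [4.0, 3.5, 5.0, 2.0, 4.5, 3.0]

rho, p_value = spearmanr(diagnostic_scores, reasoning_scores)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```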

Abstract

Background: Large language models (LLMs) are increasingly proposed for diagnostic support, but few evaluations use real-world multimodal inpatient data, particularly in low- and middle-income country (LMIC) public hospitals.

Methods: We conducted VALID, a retrospective evaluation of 539 multimodal inpatient cases from a tertiary public hospital in South Africa. Inputs included radiology imaging (CT, MRI, CXR) and reports, laboratory results, clinical notes, and vital signs. Expert panels adjudicated 300 cases (balanced and discordant subsets) to establish ground-truth diagnoses, differentials, and reasoning. Ten multimodal LLMs generated zero-shot outputs. A calibrated three-model LLM Jury scored all outputs and routine ward diagnoses across diagnostic accuracy, differential quality, reasoning, and patient safety (>10,000 evaluations). Primary outcomes were composite scores (S_3, S_4) and win rates.

Results: (i) LLM performance was tightly clustered (<15% variation) despite large cost differences; low-cost models performed comparably to top models. (ii) All LLMs significantly outperformed routine ward diagnoses on average diagnostic and safety scores. (iii) Top performance was achieved by GPT-5.1, followed by Gemini models. (iv) Adding radiology reports improved performance by 6%. (v) Diagnostic and reasoning scores were highly correlated (ρ = 0.85). (vi) Output rates varied (65–100%) due to input constraints. Results were robust across subsets and evaluation design choices.

Conclusions: Across a real-world LMIC dataset, multimodal LLMs showed similar diagnostic performance despite large cost differences and outperformed routine care on average safety metrics. Affordability, robustness, and deployment constraints may outweigh marginal performance differences in LMIC settings.
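
The abstract does not spell out how S_3, S_4, or the win rates are computed. A plausible reading is unweighted composites over three and four of the jury-scored dimensions, with win rates counted per case against the ward baseline. A minimal sketch under those assumptions; all names, scales, and values below are hypothetical, not the paper's published definitions:

```python
# Hypothetical sketch of composite scores and win rates. S_3/S_4 are
# assumed here to be unweighted means over jury dimensions; the paper
# may define them differently.
from statistics import mean

def composite(scores: dict[str, float], dims: list[str]) -> float:
    """Unweighted mean over the selected jury dimensions (assumption)."""
    return mean(scores[d] for d in dims)

# Toy jury scores for a single case and model, on an assumed 1-5 scale.
case = {"diagnosis": 4.0, "differential": 3.5, "reasoning": 4.5, "safety": 4.0}
s3 = composite(case, ["diagnosis", "differential", "reasoning"])
s4 = composite(case, ["diagnosis", "differential", "reasoning", "safety"])

def win_rate(model: list[float], baseline: list[float]) -> float:
    """Fraction of cases where the model's score beats the baseline's."""
    return sum(m > b for m, b in zip(model, baseline)) / len(model)

print(f"S_3 = {s3:.2f}, S_4 = {s4:.2f}")
print(f"win rate vs. ward = {win_rate([4.2, 3.8, 4.5], [3.9, 4.0, 4.1]):.2f}")
```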