VERT: Reliable LLM Judges for Radiology Report Evaluation

arXiv cs.AI / 4/7/2026


Key Points

  • The paper introduces VERT, an LLM-based metric for evaluating radiology reports, addressing uncertainty about how well prior LLM-judge approaches generalize across different imaging modalities and anatomies.
  • It performs a comprehensive correlation study between expert radiologist ratings and LLM judge outputs, comparing RadFact, GREEN, FineRadScore, and VERT using open/closed-source models with varying sizes and reasoning capabilities.
  • Experiments on the RadEval and RaTE-Eval datasets evaluate few-shot prompting, ensembling, and parameter-efficient fine-tuning (with RaTE-Eval as a focus) to determine effective judge configurations.
  • Results indicate VERT improves correlation with radiologist judgments by up to 11.7% relative to GREEN, and that fine-tuning Qwen3 30B can achieve up to 25% gains with only 1,300 samples.
  • The study also includes systematic error analysis to characterize where LLM metrics align or diverge from expert judgments and reports that fine-tuning can reduce inference time by up to 37.2×.

Abstract

Current literature on radiology report evaluation has focused primarily on designing LLM-based metrics and fine-tuning small models for chest X-rays. However, it remains unclear whether these approaches are robust when applied to reports from other modalities and anatomies. Which model and prompt configurations are best suited to serve as LLM judges for radiology evaluation? We conduct a thorough correlation analysis between expert and LLM-based ratings. We compare three existing LLM-as-a-judge metrics (RadFact, GREEN, and FineRadScore) alongside VERT, our proposed LLM-based metric, using open- and closed-source models (reasoning and non-reasoning) of different sizes across two expert-annotated datasets, RadEval and RaTE-Eval, spanning multiple modalities and anatomies. We further evaluate few-shot approaches, ensembling, and parameter-efficient fine-tuning using RaTE-Eval. To better understand metric behavior, we perform a systematic error detection and categorization study to assess alignment of these metrics against expert judgments and identify areas of lower and higher agreement. Our results show that VERT improves correlation with radiologist judgments by up to 11.7% relative to GREEN. Furthermore, fine-tuning Qwen3 30B yields gains of up to 25% using only 1,300 training samples. The fine-tuned model also reduces inference time by up to 37.2×. These findings highlight the effectiveness of LLM-based judges and demonstrate that reliable evaluation can be achieved with lightweight adaptation.
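The core of the study is measuring how well a metric's scores track expert ratings. A minimal sketch of that kind of analysis is below; the rating values and score scales are hypothetical illustrations, not data from the paper, and real experiments would use radiologist annotations from RadEval or RaTE-Eval alongside the outputs of each LLM judge.

```python
# Minimal sketch: correlating LLM-judge scores with expert ratings.
# All numbers here are made up for illustration.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical expert ratings (1-5) and judge scores (0-1) for five reports.
expert_ratings = [4, 2, 5, 3, 1]
judge_scores   = [0.8, 0.4, 0.9, 0.5, 0.2]

r = pearson(expert_ratings, judge_scores)
print(f"correlation with experts: {r:.3f}")
```

In practice one would compute such a correlation per metric (RadFact, GREEN, FineRadScore, VERT) and per model configuration, then compare; rank correlations such as Kendall's tau are also common when expert ratings are ordinal.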