When Metrics Disagree: Automatic Similarity vs. LLM-as-a-Judge for Clinical Dialogue Evaluation

arXiv cs.CL / 4/1/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper reports a LoRA domain-adaptation of Llama-2-7B using real patient–physician transcripts to improve clinical dialogue responses while retaining the base model’s knowledge.
  • Model evaluation is done in two tracks: traditional lexical overlap metrics (BLEU/ROUGE) and an “LLM-as-a-Judge” approach where GPT-4 scores semantic quality.
  • Results show the LoRA model improves substantially on lexical metrics, yet the GPT-4 judge scores diverge notably, slightly favoring the baseline model's conversational flow.
  • The authors conclude that automatic metrics—whether lexical measures or LLM-based judges—may not reliably reflect clinical utility, highlighting the need for thorough human medical expert validation.
  • The study frames metric disagreement as a safety-critical issue for healthcare LLM deployment and positions expert review as an indispensable final step.
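The LoRA mechanism summarized above can be illustrated with a minimal sketch. This is not the paper's code: it shows the core idea that the adapted weight is W' = W + (α/r)·B·A, where only the small matrices A (r×d_in) and B (d_out×r) are trained while W stays frozen. All names and dimensions here are illustrative.

```python
# Hypothetical sketch of a LoRA-adapted linear layer (plain Python, no deps).
# The frozen weight W is combined with a trainable low-rank update B @ A,
# scaled by alpha / r, without ever materialising the merged matrix.

def lora_forward(W, A, B, x, alpha=16, r=2):
    """Compute (W + (alpha / r) * B @ A) @ x.

    W: frozen base weight, d_out x d_in
    A: trainable down-projection, r x d_in (r << d_in)
    B: trainable up-projection, d_out x r (typically zero-initialised,
       so training starts from the base model's behaviour)
    x: input vector of length d_in
    """
    base = [sum(w * xi for w, xi in zip(row, x)) for row in W]    # W @ x
    Ax = [sum(a * xi for a, xi in zip(row, x)) for row in A]      # A @ x (length r)
    BAx = [sum(b * axi for b, axi in zip(row, Ax)) for row in B]  # B @ (A @ x)
    scale = alpha / r
    return [b + scale * d for b, d in zip(base, BAx)]
```

Because B starts at zero, the adapted model initially reproduces the base model exactly, which is how LoRA preserves foundational knowledge while fine-tuning on the clinical transcripts.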

Abstract

As Large Language Models (LLMs) are increasingly integrated into healthcare to address complex inquiries, ensuring their reliability remains a critical challenge. Recent studies have highlighted that generic LLMs often struggle in clinical contexts, occasionally producing misleading guidance. To mitigate these risks, this research focuses on the domain-specific adaptation of Llama-2-7B using the Low-Rank Adaptation (LoRA) technique. By injecting trainable low-rank matrices into the Transformer layers, we efficiently adapted the model using authentic patient-physician transcripts while preserving the foundational knowledge of the base model. Our objective was to enhance precision and contextual relevance in responding to medical queries by capturing the specialized nuances of clinical discourse. Due to the resource-intensive nature of large-scale human validation, the model's performance was evaluated through a dual-track framework: Track A utilized traditional lexical similarity metrics (e.g., BLEU, ROUGE), while Track B employed an "LLM-as-a-Judge" paradigm using GPT-4 for semantic assessment. Our results demonstrate that while the LoRA-enhanced model achieved significant improvements across all quantitative lexical dimensions, a profound disagreement surfaced in the GPT-4 evaluation, which marginally favored the baseline model's conversational flow. This metric divergence underscores a pivotal finding: traditional automated scores may not fully reflect clinical utility. Consequently, we propose that while automated metrics and LLM judges serve as valuable developmental proxies, rigorous validation by human medical experts remains an indispensable requirement for the safe deployment of LLMs in healthcare settings.
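The Track A / Track B divergence is easy to see in miniature. The sketch below (my own illustration, not the paper's evaluation pipeline) implements a simplified unigram-overlap F1, the core idea behind ROUGE-1: a response that reuses the reference's exact wording scores high, while a clinically equivalent paraphrase scores low, which is precisely the blind spot an LLM judge is meant to cover.

```python
from collections import Counter

def rouge1_f(candidate: str, reference: str) -> float:
    """Simplified ROUGE-1 F1: F1 over unigram overlap counts.

    A lexical-overlap proxy: it rewards shared surface tokens and is
    blind to paraphrase, which is why it can disagree with a
    semantics-oriented judge model.
    """
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

For example, against the reference "take ibuprofen 400 mg with food", an exact copy scores 1.0 while the paraphrase "you can use ibuprofen after eating" scores near zero, even though a physician (or a judge model) might rate both as acceptable guidance. This is the kind of gap that makes expert validation the decisive final check.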