Case-Specific Rubrics for Clinical AI Evaluation: Methodology, Validation, and LLM-Clinician Agreement Across 823 Encounters

arXiv cs.AI · April 28, 2026


Key Points

  • The paper proposes a case-specific, clinician-authored rubric methodology to evaluate clinical AI outputs in a way that supports iterative, safe deployment without requiring expensive expert scoring per instance.
  • Twenty clinicians authored 1,646 rubrics for 823 clinical encounters; the researchers validate each rubric by checking that an LLM-based scoring agent consistently ranks the clinician-preferred output above the rejected one (the check is sketched after this list).
  • Across seven versions of an EHR-embedded AI agent, clinician-authored rubrics strongly separated high- versus low-quality outputs (median score gap 82.9%) with very high scoring stability, and median scores improved from 84% to 95%.
  • In later experiments, agreement between clinician and LLM-based rankings (Kendall’s tau ~0.42–0.46) matched or exceeded agreement between clinicians themselves (tau ~0.38–0.43), suggesting LLM-generated rubrics can approximate clinician consensus.
  • The authors argue that pairing LLM rubrics with ongoing clinician authorship can dramatically expand evaluation coverage at ~1,000× lower cost, while noting that “ceiling compression” may limit future inter-rater agreement measurement.
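
The validation step referenced in the key points can be read as a simple repeated-comparison loop: score the preferred and rejected outputs several times and accept the rubric only if the preferred output always wins. The sketch below is illustrative only; the scorer callable, pass criterion, run count, and returned statistics are assumptions, not details taken from the paper.

```python
from statistics import median
from typing import Callable

def validate_rubric(
    scorer: Callable[[str, str], float],  # stand-in for an LLM call: (rubric, output) -> score in [0, 100]
    rubric: str,
    preferred: str,
    rejected: str,
    runs: int = 5,
) -> dict:
    """Score both outputs `runs` times; accept the rubric only if the preferred
    output wins on every run. Also report the median score gap and the score
    range as a crude stability measure."""
    preferred_scores = [scorer(rubric, preferred) for _ in range(runs)]
    rejected_scores = [scorer(rubric, rejected) for _ in range(runs)]
    return {
        "passes": all(p > r for p, r in zip(preferred_scores, rejected_scores)),
        "score_gap": median(preferred_scores) - median(rejected_scores),
        "score_range": max(preferred_scores) - min(preferred_scores),
    }

# Toy usage with a dummy scorer that counts rubric criteria present in the output.
if __name__ == "__main__":
    def dummy_scorer(rubric: str, output: str) -> float:
        criteria = [c.strip() for c in rubric.split(";")]
        return 100.0 * sum(c.lower() in output.lower() for c in criteria) / len(criteria)

    result = validate_rubric(
        dummy_scorer,
        rubric="documents allergy; notes follow-up plan",
        preferred="Notes follow-up plan and documents allergy to penicillin.",
        rejected="Patient seen today.",
    )
    print(result)  # {'passes': True, 'score_gap': 100.0, 'score_range': 0.0}
```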

Abstract

Objective. Clinical AI documentation systems require evaluation methodologies that are clinically valid, economically viable, and sensitive to iterative changes. Methods requiring expert review per scoring instance are too slow and expensive for safe, iterative deployment. We present a case-specific, clinician-authored rubric methodology for clinical AI evaluation and examine whether LLM-generated rubrics can approximate clinician agreement.

Materials and Methods. Twenty clinicians authored 1,646 rubrics for 823 clinical cases (736 real-world, 87 synthetic) across primary care, psychiatry, oncology, and behavioral health. Each rubric was validated by confirming that an LLM-based scoring agent consistently scored clinician-preferred outputs higher than rejected ones. Seven versions of an EHR-embedded AI agent for clinicians were evaluated across all cases.

Results. Clinician-authored rubrics discriminated effectively between high- and low-quality outputs (median score gap: 82.9%) with high scoring stability (median range: 0.00%). Median scores improved from 84% to 95%. In later experiments, clinician-LLM ranking agreement (tau: 0.42-0.46) matched or exceeded clinician-clinician agreement (tau: 0.38-0.43), attributable to both ceiling compression and LLM rubric improvement.

Discussion. This convergence supports incorporating LLM rubrics alongside clinician-authored ones. At roughly 1,000 times lower cost, LLM rubrics enable substantially greater evaluation coverage, while continued clinician authorship grounds evaluation in expert judgment. Ceiling compression poses a methodological challenge for future inter-rater agreement studies.

Conclusion. Case-specific rubrics offer a path for clinical AI evaluation that preserves expert judgment while enabling automation at roughly three orders of magnitude lower cost. Clinician-authored rubrics establish the baseline against which LLM rubrics are validated.
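
The ranking-agreement figures above use Kendall's tau, which measures how consistently two raters order pairs of outputs the same way. The snippet below shows how such agreement could be computed with SciPy; the ranking vectors are made-up placeholders, not data from the study.

```python
from scipy.stats import kendalltau

# Hypothetical rankings of seven agent versions (1 = best) by a clinician
# and by the LLM-based rubric scorer; values are illustrative only.
clinician_ranking = [1, 2, 3, 4, 5, 6, 7]
llm_ranking = [1, 3, 2, 4, 6, 5, 7]

tau, p_value = kendalltau(clinician_ranking, llm_ranking)
print(f"Kendall's tau = {tau:.2f} (p = {p_value:.3f})")
```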