WHBench: Evaluating Frontier LLMs with Expert-in-the-Loop Validation on Women's Health Topics

arXiv cs.AI / 4/2/2026


Key Points

  • The paper introduces WHBench, a women’s health–focused evaluation suite with 47 expert-crafted scenarios spanning 10 topics to uncover clinically meaningful LLM failure modes such as outdated guidance and dosing errors.
  • It evaluates 22 frontier LLMs using a 23-criterion rubric covering clinical accuracy, safety, completeness, communication, instruction following, equity, uncertainty handling, and guideline adherence, with safety-weighted scoring and server-side recalculation.
  • Across 3,102 attempted responses, no model exceeds 75% mean performance, with the best at 72.1%, and results show low fully-correct rates plus meaningful variation in harm rates.
  • The authors find moderate inter-rater reliability at the response-label level but high reliability for model ranking, supporting WHBench for comparative evaluation while reinforcing the need for expert oversight in clinical deployment.
  • WHBench is positioned as a public, failure-mode-aware benchmark intended to track progress toward safer and more equitable women’s health AI.
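The paper describes safety-weighted scoring over the 23-criterion rubric but the summary gives no formula, so the following is only a minimal illustrative sketch under an assumed scheme: safety-critical criteria get a larger weight in a weighted mean, so a safety failure costs more than an equal miss on, say, communication quality. The function name, weight value, and input format are all hypothetical, not taken from the paper.

```python
# Hypothetical sketch of safety-weighted rubric scoring (NOT the paper's
# actual method): criteria flagged as safety-critical carry a larger weight,
# so failing them penalizes the overall score more heavily.

def safety_weighted_score(criteria_scores, safety_flags, safety_weight=3.0):
    """criteria_scores: per-criterion scores in [0, 1];
    safety_flags: parallel booleans marking safety-critical criteria."""
    weights = [safety_weight if is_safety else 1.0 for is_safety in safety_flags]
    total = sum(w * s for w, s in zip(weights, criteria_scores))
    return total / sum(weights)

# Example: a failed safety criterion (score 0.0) drags the weighted mean
# well below the unweighted mean of ~0.67.
print(safety_weighted_score([1.0, 1.0, 0.0], [False, False, True]))  # → 0.4
```

Under this assumed scheme, a dosing error that fails one safety-critical criterion lowers the score as much as failing three ordinary criteria would.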

Abstract

Large language models are increasingly used for medical guidance, but women's health remains under-evaluated in benchmark design. We present the Women's Health Benchmark (WHBench), a targeted evaluation suite of 47 expert-crafted scenarios across 10 women's health topics, designed to expose clinically meaningful failure modes including outdated guidelines, unsafe omissions, dosing errors, and equity-related blind spots. We evaluate 22 models using a 23-criterion rubric spanning clinical accuracy, completeness, safety, communication quality, instruction following, equity, uncertainty handling, and guideline adherence, with safety-weighted penalties and server-side score recalculation. Across 3,102 attempted responses (3,100 scored), no model's mean performance exceeds 75 percent; the best model reaches 72.1 percent. Even top models show low fully correct rates and substantial variation in harm rates. Inter-rater reliability is moderate at the response-label level but high for model ranking, supporting WHBench's utility for comparative system evaluation while highlighting the need for expert oversight in clinical deployment. WHBench provides a public, failure-mode-aware benchmark to track safer and more equitable progress in women's health AI.