The Validity Gap in Health AI Evaluation: A Cross-Sectional Analysis of Benchmark Composition

arXiv cs.AI / 3/20/2026


Key Points

  • The paper identifies a structural validity gap in health AI benchmarks: they rarely document the composition of their "patient" and query populations, so aggregate performance metrics can misrepresent how ready a model is for clinical use.
  • It analyzes 18,707 consumer health queries across six public benchmarks, using LLMs as automated coding instruments to apply a standardized 16-field taxonomy profiling context, topic, and intent (see the coding sketch after this list).
  • Findings show the benchmarks skew toward wellness-related data: complex diagnostic inputs (laboratory values, imaging, raw medical records), safety-critical scenarios, chronic care contexts, and vulnerable populations are all sparsely represented.
  • The authors call for standardized query profiling, akin to clinical trial reporting, so that benchmarks reflect real-world clinical practice: raw clinical artifacts, diverse populations, and longitudinal care scenarios.
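
To make the method concrete, here is a minimal sketch of how an LLM can serve as an automated coding instrument over a query corpus. The OpenAI client, model name, and the four fields and allowed values shown are illustrative assumptions; the paper's actual 16-field taxonomy is not enumerated in this summary.

```python
import json
from openai import OpenAI  # assumes the OpenAI Python SDK; any chat-style LLM client works

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical subset of the 16-field taxonomy. The real field names and
# allowed values come from the paper and are not listed in this summary.
TAXONOMY = {
    "context": ["wellness", "acute_symptom", "chronic_disease", "emergency"],
    "topic": ["general_health", "medication", "mental_health", "nutrition"],
    "intent": ["information_seeking", "triage", "self_management"],
    "objective_data": ["none", "wearable_signal", "lab_value", "imaging", "raw_record"],
}

PROMPT = (
    "You are a coding instrument. Classify the consumer health query below, "
    "choosing exactly one allowed value per field.\n"
    "Fields and allowed values: {schema}\n"
    "Query: {query}\n"
    "Respond with a JSON object mapping each field name to one chosen value."
)

def code_query(query: str, model: str = "gpt-4o-mini") -> dict:
    """Apply the taxonomy to one query and return the parsed field assignments."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # a coding instrument should be as deterministic as possible
        response_format={"type": "json_object"},
        messages=[{"role": "user", "content": PROMPT.format(
            schema=json.dumps(TAXONOMY), query=query)}],
    )
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    print(code_query("My smartwatch says my resting heart rate jumped 15 bpm this week."))
```

Run over all 18,707 queries, the coded records can then be aggregated into composition statistics like those quoted in the Results.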

Abstract

Background: Clinical trials rely on transparent inclusion criteria to ensure generalizability. In contrast, benchmarks validating health-related large language models (LLMs) rarely characterize the "patient" or "query" populations they contain. Without defined composition, aggregate performance metrics may misrepresent model readiness for clinical use.

Methods: We analyzed 18,707 consumer health queries across six public benchmarks using LLMs as automated coding instruments to apply a standardized 16-field taxonomy profiling context, topic, and intent.

Results: We identified a structural "validity gap." While benchmarks have evolved from static retrieval to interactive dialogue, clinical composition remains misaligned with real-world needs. Although 42% of the corpus referenced objective data, this was polarized toward wellness-focused wearable signals (17.7%); complex diagnostic inputs remained rare, including laboratory values (5.2%), imaging (3.8%), and raw medical records (0.6%). Safety-critical scenarios were effectively absent: suicide/self-harm queries comprised <0.7% of the corpus and chronic disease management only 5.5%. Benchmarks also neglected vulnerable populations (pediatrics/older adults <11%) and global health needs.

Conclusions: Evaluation benchmarks remain misaligned with real-world clinical needs, lacking raw clinical artifacts, adequate representation of vulnerable populations, and longitudinal chronic care scenarios. The field must adopt standardized query profiling, analogous to clinical trial reporting, to align evaluation with the full complexity of clinical practice.
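
The composition percentages in the Results are straightforward to reproduce once every query is coded. Below is a small, self-contained sketch of that tabulation step; the records and field name are hypothetical, matching the illustrative taxonomy sketched above rather than the paper's actual schema.

```python
from collections import Counter
from typing import Iterable

def composition_report(coded: Iterable[dict], field: str) -> dict[str, float]:
    """Tabulate the percentage share of each value for one taxonomy field."""
    counts = Counter(record[field] for record in coded)
    total = sum(counts.values())
    return {value: round(100 * n / total, 1) for value, n in counts.items()}

# Toy coded records standing in for the 18,707-query corpus.
coded = [
    {"objective_data": "wearable_signal"},
    {"objective_data": "wearable_signal"},
    {"objective_data": "lab_value"},
    {"objective_data": "none"},
]

print(composition_report(coded, "objective_data"))
# {'wearable_signal': 50.0, 'lab_value': 25.0, 'none': 25.0}
```

Publishing a table like this per field, for every benchmark, is essentially the standardized query profiling the authors advocate.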