
QuarkMedBench: A Real-World Scenario Driven Benchmark for Evaluating Large Language Models

arXiv cs.CL / 3/17/2026


Key Points

  • QuarkMedBench is introduced as a real-world, scenario-driven benchmark for evaluating LLMs in medicine, addressing the gap between performance on standardized exams and performance on real-world medical queries.
  • The benchmark comprises 20,821 single-turn queries and 3,853 multi-turn sessions spanning Clinical Care, Wellness Health, and Professional Inquiry, plus an automated scoring framework that generates 220,617 fine-grained rubrics (~9.8 per query) through multi-model consensus and evidence-based retrieval (see the first sketch after this list).
  • The scoring framework uses hierarchical weighting and safety constraints to quantify medical accuracy, key-point coverage, and risk interception, aiming to cut the cost and subjectivity of human grading (see the second sketch after this list).
  • Experiments report a 91.8% concordance rate with clinical expert blind audits and reveal notable performance gaps among state-of-the-art models when handling real-world clinical nuances, underscoring the limitations of exam-based metrics.
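
To make the rubric-generation step concrete, here is a minimal Python sketch of the multi-model consensus idea (the first sketch referenced above). Everything in it is an illustrative assumption: `consensus_rubrics`, the vote threshold, and the string-equality matching are stand-ins, and the paper's pipeline additionally verifies candidate rubrics against retrieved clinical evidence.

```python
from collections import Counter

def consensus_rubrics(proposals: list[list[str]], min_votes: int = 2) -> list[str]:
    """Keep rubric items proposed independently by at least `min_votes` judge models.

    Hypothetical simplification: items are matched by normalized string equality;
    a real pipeline would use semantic matching and then check surviving items
    against retrieved clinical evidence.
    """
    votes: Counter[str] = Counter()
    for model_items in proposals:
        # Each judge model votes at most once per distinct item.
        for item in {i.strip().lower() for i in model_items}:
            votes[item] += 1
    return [item for item, n in votes.items() if n >= min_votes]

# Three hypothetical judge models drafting rubrics for a single query.
proposals = [
    ["Mentions ibuprofen contraindication in pregnancy", "Recommends in-person follow-up"],
    ["mentions ibuprofen contraindication in pregnancy", "States maximum daily dose"],
    ["Recommends in-person follow-up", "States maximum daily dose"],
]
print(consensus_rubrics(proposals))  # all three criteria reach the 2-vote threshold
```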
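
A second sketch shows how hierarchical weighting and safety constraints could turn per-rubric verdicts into a single score. `RubricItem`, the weight values, and the hard zero on any failed safety-critical criterion are hypothetical choices meant to mirror the "risk interception" behavior the paper describes, not the authors' actual implementation.

```python
from dataclasses import dataclass

@dataclass
class RubricItem:
    text: str        # fine-grained criterion, e.g. "advises urgent care for chest pain"
    weight: float    # hierarchical weight: accuracy > key-point coverage > style
    is_safety: bool  # safety-critical criteria act as hard constraints
    passed: bool     # verdict from the automated judge

def score_response(rubrics: list[RubricItem]) -> float:
    """Weighted rubric score with a hard safety gate (hypothetical scheme)."""
    # Safety constraint: any failed safety-critical rubric zeroes the score,
    # so an unsafe answer cannot be rescued by good coverage elsewhere.
    if any(r.is_safety and not r.passed for r in rubrics):
        return 0.0
    total = sum(r.weight for r in rubrics)
    earned = sum(r.weight for r in rubrics if r.passed)
    return earned / total if total else 0.0

# Example: three rubrics for one query, with medical accuracy weighted highest.
rubrics = [
    RubricItem("states the correct dosage range", 3.0, False, True),
    RubricItem("advises urgent evaluation for chest pain", 2.0, True, True),
    RubricItem("explains the answer in plain language", 1.0, False, False),
]
print(f"score = {score_response(rubrics):.2f}")  # -> score = 0.83
```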

Abstract

While Large Language Models (LLMs) excel on standardized medical exams, high scores often fail to translate to high-quality responses for real-world medical queries. Current evaluations rely heavily on multiple-choice questions, failing to capture the unstructured, ambiguous, and long-tail complexities inherent in genuine user inquiries. To bridge this gap, we introduce QuarkMedBench, an ecologically valid benchmark tailored for real-world medical LLM assessment. We compiled a massive dataset spanning Clinical Care, Wellness Health, and Professional Inquiry, comprising 20,821 single-turn queries and 3,853 multi-turn sessions. To objectively evaluate open-ended answers, we propose an automated scoring framework that integrates multi-model consensus with evidence-based retrieval to dynamically generate 220,617 fine-grained scoring rubrics (~9.8 per query). During evaluation, hierarchical weighting and safety constraints structurally quantify medical accuracy, key-point coverage, and risk interception, effectively mitigating the high costs and subjectivity of human grading. Experimental results demonstrate that the generated rubrics achieve a 91.8% concordance rate with clinical expert blind audits, establishing a high degree of medical reliability. Crucially, baseline evaluations on this benchmark reveal significant performance disparities among state-of-the-art models when navigating real-world clinical nuances, highlighting the limitations of conventional exam-based metrics. Ultimately, QuarkMedBench provides a rigorous, reproducible yardstick for measuring LLM performance on complex health issues, while its framework inherently supports dynamic knowledge updates to prevent benchmark obsolescence.
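
For concreteness, the reported 91.8% concordance can be read as the fraction of verdicts on which the automated judge agrees with the expert blind audit. Below is a minimal sketch under that assumption; the binary-verdict framing and the function name are ours, not the paper's protocol.

```python
def concordance_rate(auto: list[bool], expert: list[bool]) -> float:
    """Fraction of rubric-level verdicts where the automated judge and the
    expert blind audit agree (an assumed reading of the reported metric)."""
    assert len(auto) == len(expert) and auto, "verdict lists must match and be non-empty"
    return sum(a == e for a, e in zip(auto, expert)) / len(auto)

# Toy example with one disagreement out of ten verdicts.
auto   = [True, True, False, True, True, False, True, True, True, True]
expert = [True, True, False, True, False, False, True, True, True, True]
print(f"{concordance_rate(auto, expert):.1%}")  # -> 90.0%
```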