Medmarks: A Comprehensive Open-Source LLM Benchmark Suite for Medical Tasks

arXiv cs.CL / 5/5/2026


Key Points

  • Medmarks is a fully open-source LLM benchmark suite for medical tasks that addresses issues like benchmark saturation, restricted data access, and incomplete task coverage by providing 30 benchmarks across multiple medical capabilities.
  • The authors systematically evaluate 61 models over 71 configurations using verifiable metrics and LLM-as-a-Judge, including tasks such as question answering, information extraction, medical calculations, and open-ended clinical reasoning.
  • Results indicate that frontier reasoning models (Gemini 3 Pro Preview, GPT-5.1, and GPT-5.2) achieve the best overall performance, while medically fine-tuned models outperform generalist models.
  • The study finds that many frontier proprietary models are more token-efficient than open-weight alternatives, and it documents notable answer-order bias, especially in smaller models and Grok 4.
  • A subset of the benchmarks (Medmarks-T) can be used as reinforcement learning environments for post-training LLMs aimed at medical reasoning, with the code released on GitHub.

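The answer-order bias noted above can be measured by re-asking the same multiple-choice question with its options permuted and checking whether the model keeps choosing the same content. Below is a minimal sketch of that idea; `ask_model` is a hypothetical stand-in (here, a maximally position-biased model that always picks the first option), not part of Medmarks.

```python
# Sketch: quantify answer-order bias by permuting multiple-choice options.
# `ask_model` is a hypothetical stand-in for a real LLM call; this toy
# version always returns the first option, i.e. it is fully position-biased.
import itertools


def ask_model(question: str, options: list[str]) -> str:
    # Hypothetical biased model: picks the first listed option regardless
    # of content. A real implementation would call an LLM API here.
    return options[0]


def order_bias_rate(question: str, options: list[str]) -> float:
    """Fraction of option orderings where the chosen *content* differs
    from the choice made on the canonical ordering."""
    baseline = ask_model(question, list(options))
    perms = list(itertools.permutations(options))
    flips = sum(1 for perm in perms if ask_model(question, list(perm)) != baseline)
    return flips / len(perms)


q = "Which electrolyte disturbance causes peaked T waves?"
opts = ["Hyperkalemia", "Hyponatremia", "Hypocalcemia"]
# A content-driven model would score 0.0; this position-biased toy flips
# on every permutation whose first option differs from the canonical first.
print(order_bias_rate(q, opts))
```

With three options there are six orderings, and the toy model's pick only matches the baseline on the two orderings that start with the canonical first option, so it flips on 4/6 of them.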
Abstract

Evaluating large language models (LLMs) for medical applications remains challenging due to benchmark saturation, limited data accessibility, and insufficient coverage of relevant tasks. Existing suites have either saturated, depend heavily on restricted datasets, or lack comprehensive model coverage. We introduce Medmarks, a fully open-source evaluation suite with 30 benchmarks spanning question answering, information extraction, medical calculations, and open-ended clinical reasoning. We perform a systematic evaluation of 61 models across 71 configurations using verifiable metrics and LLM-as-a-Judge. Our results show that frontier reasoning models (Gemini 3 Pro Preview, GPT-5.1, & GPT-5.2) achieve the highest performance across the benchmarks, that most frontier proprietary models are significantly more token-efficient than open-weight alternatives, that medically fine-tuned models outperform their generalist counterparts, and that models are susceptible to answer-order bias (particularly smaller models and Grok 4). A subset of our evals (Medmarks-T) can be used directly as reinforcement learning environments to post-train LLMs for medical reasoning. Code is available at https://github.com/MedARC-AI/Medmarks
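Using a verifiable benchmark as an RL environment, in the spirit of Medmarks-T, typically amounts to a reward function that returns 1 when the model's final answer matches the gold label and 0 otherwise. The sketch below illustrates that pattern; the function names and the `Answer: X` output convention are assumptions for illustration, not the actual Medmarks API.

```python
# Sketch of a verifiable-reward RL environment over a multiple-choice item:
# parse the model completion's final answer and compare it to the gold label.
# The `Answer: X` convention and these helper names are illustrative only.
import re


def extract_answer(completion: str) -> str:
    """Pull the final 'Answer: X' letter out of a model completion."""
    match = re.search(r"Answer:\s*([A-D])\s*$", completion.strip())
    return match.group(1) if match else ""


def reward(completion: str, gold: str) -> float:
    """Binary verifiable reward: 1.0 on an exact match with the gold label."""
    return 1.0 if extract_answer(completion) == gold else 0.0


print(reward("The ECG shows peaked T waves.\nAnswer: A", "A"))  # 1.0
print(reward("Answer: B", "A"))  # 0.0
```

Because the reward is computed by string matching rather than a learned judge, it is cheap and deterministic, which is what makes such evals usable as post-training environments.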