Leveraging Computerized Adaptive Testing for Cost-effective Evaluation of Large Language Models in Medical Benchmarking

arXiv cs.CL / 3/26/2026


Key Points

  • The paper proposes a computerized adaptive testing (CAT) framework using item response theory (IRT) to evaluate large language models on standardized medical knowledge more efficiently than static benchmarks.
  • A two-phase methodology combines Monte Carlo simulation to tune CAT settings with an empirical study of 38 LLMs, each assessed via both the full item bank and an adaptive test that stops when reliability reaches a predefined threshold (standard error ≤ 0.3); a minimal sketch of such an adaptive loop follows this list.
  • CAT-based proficiency estimates closely match full-bank results, showing near-perfect correlation (r = 0.988) while requiring only about 1.3% of the items.
  • The approach substantially reduces evaluation time (hours to minutes per model), token usage, and computational cost, while maintaining inter-model performance rankings.
  • The authors position the method as a psychometrically grounded, low-cost benchmarking and monitoring tool rather than a replacement for real-world clinical validation or safety-focused prospective studies.

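The adaptive loop summarized above is straightforward to sketch. The Python snippet below is a minimal illustration rather than the authors' implementation: it assumes a two-parameter logistic (2PL) IRT model, selects the unadministered item with maximum Fisher information at the current ability estimate, re-estimates ability by expected a posteriori (EAP) scoring on a grid with a standard-normal prior, and stops once the posterior standard error falls to 0.3 or below. The item-bank format, the `answer_item` callback (which would query an LLM and score its response), and the 50-item cap are illustrative assumptions, not details from the paper.

```python
import numpy as np

def p_2pl(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = p_2pl(theta, a, b)
    return a ** 2 * p * (1.0 - p)

def eap_update(responses, a, b, grid=np.linspace(-4, 4, 401)):
    """EAP ability estimate and posterior SE given administered items
    (parameters a, b) and their 0/1 responses."""
    prior = np.exp(-0.5 * grid ** 2)              # standard-normal prior (unnormalized)
    like = np.ones_like(grid)
    for r, ai, bi in zip(responses, a, b):
        p = p_2pl(grid, ai, bi)
        like *= p ** r * (1 - p) ** (1 - r)
    post = prior * like
    post /= post.sum()
    theta = np.sum(grid * post)
    se = np.sqrt(np.sum((grid - theta) ** 2 * post))
    return theta, se

def run_cat(item_bank, answer_item, se_stop=0.3, max_items=50):
    """Minimal CAT loop: max-information item selection, EAP scoring,
    termination when posterior SE <= se_stop.
    `item_bank` is an (n, 2) array of (a, b) rows; `answer_item(i)` returns 0/1."""
    administered, responses = [], []
    theta, se = 0.0, np.inf
    while se > se_stop and len(administered) < max_items:
        remaining = [i for i in range(len(item_bank)) if i not in administered]
        # pick the most informative remaining item at the current ability estimate
        nxt = max(remaining, key=lambda i: fisher_info(theta, *item_bank[i]))
        administered.append(nxt)
        responses.append(answer_item(nxt))
        a = item_bank[administered, 0]
        b = item_bank[administered, 1]
        theta, se = eap_update(np.array(responses), a, b)
    return theta, se, administered
```
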
Abstract

The rapid proliferation of large language models (LLMs) in healthcare creates an urgent need for scalable and psychometrically sound evaluation methods. Conventional static benchmarks are costly to administer repeatedly, vulnerable to data contamination, and lack calibrated measurement properties for fine-grained performance tracking. We propose and validate a computerized adaptive testing (CAT) framework grounded in item response theory (IRT) for efficient assessment of standardized medical knowledge in LLMs. The study comprises a two-phase design: a Monte Carlo simulation to identify optimal CAT configurations and an empirical evaluation of 38 LLMs using a human-calibrated medical item bank. Each model completed both the full item bank and an adaptive test that dynamically selected items based on real-time ability estimates and terminated upon reaching a predefined reliability threshold (standard error ≤ 0.3). Results show that CAT-derived proficiency estimates achieved a near-perfect correlation with full-bank estimates (r = 0.988) while using only 1.3% of the items. Evaluation time was reduced from several hours to minutes per model, with substantial reductions in token usage and computational cost, while preserving inter-model performance rankings. This work establishes a psychometric framework for rapid, low-cost benchmarking of foundational medical knowledge in LLMs. The proposed adaptive methodology is intended as a standardized pre-screening and continuous monitoring tool and is not a substitute for real-world clinical validation or safety-oriented prospective studies.
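
For the simulation phase, a Monte Carlo study of this kind typically simulates examinees with known abilities, draws item responses from the calibrated IRT model, runs the adaptive test, and checks how well the shortened test recovers ability (and hence rankings). The sketch below reuses `p_2pl` and `run_cat` from the previous block; the bank size, parameter distributions, and number of simulated examinees are illustrative assumptions rather than the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibrated item bank: discriminations ~ lognormal, difficulties ~ N(0, 1)
n_items = 2000
item_bank = np.column_stack([rng.lognormal(0.0, 0.3, n_items),
                             rng.normal(0.0, 1.0, n_items)])

true_thetas = rng.normal(0.0, 1.0, size=200)   # simulated examinee abilities
cat_thetas, lengths = [], []

for theta_true in true_thetas:
    # responses are drawn from the 2PL model at the examinee's true ability
    def answer_item(i, t=theta_true):
        return int(rng.random() < p_2pl(t, *item_bank[i]))

    theta_hat, se, administered = run_cat(item_bank, answer_item, se_stop=0.3)
    cat_thetas.append(theta_hat)
    lengths.append(len(administered))

# How well do short adaptive tests recover ability, and how short are they?
r = np.corrcoef(true_thetas, cat_thetas)[0, 1]
print(f"correlation with true ability: {r:.3f}, mean test length: {np.mean(lengths):.1f}")
```

The empirical phase reported in the paper follows the same logic, except that each "examinee" is one of the 38 LLMs answering real items, and the CAT-derived estimate is compared against the estimate from administering the full item bank.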