XpertBench: Expert-Level Tasks with Rubrics-Based Evaluation

arXiv cs.AI / 4/6/2026


Key Points

  • The paper introduces XpertBench, a rubrics-based benchmark with 1,346 expert-level tasks across 80 categories intended to better evaluate LLM performance on complex open-ended professional work.
  • XpertBench draws tasks from 1,000+ expert submissions across domains such as finance, healthcare, legal services, education, and dual-track research (STEM and Humanities), aiming for higher ecological validity than conventional benchmarks.
  • Each task is scored against a detailed rubric, typically 15–40 weighted checkpoints, to measure professional rigor and reduce ambiguity in evaluation (see the sketch after this list).
  • The authors propose ShotJudge, an evaluation paradigm in which LLM judges are calibrated with expert few-shot exemplars to mitigate self-rewarding evaluation bias.
  • Experiments show current leading LLMs face an “expert-gap,” with a reported peak success rate of ~66% and mean scores around 55%, along with noticeable domain-specific strengths and weaknesses.
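
The paper's scoring pipeline is not reproduced here, but a rubric of weighted checkpoints reduces to a simple weighted-fraction score. The sketch below illustrates that idea only; the Checkpoint class, rubric_score function, and the toy three-item rubric are hypothetical and not taken from XpertBench.

```python
from dataclasses import dataclass

@dataclass
class Checkpoint:
    description: str   # what the response must get right
    weight: float      # relative importance within the rubric
    satisfied: bool    # judge's verdict for a given model response

def rubric_score(checkpoints: list[Checkpoint]) -> float:
    """Return the weighted percentage of checkpoints satisfied (0-100)."""
    total = sum(c.weight for c in checkpoints)
    earned = sum(c.weight for c in checkpoints if c.satisfied)
    return 100.0 * earned / total if total else 0.0

# Toy 3-checkpoint rubric (real XpertBench tasks use roughly 15-40).
rubric = [
    Checkpoint("Cites the controlling regulation", 3.0, True),
    Checkpoint("Flags the relevant exception", 2.0, False),
    Checkpoint("Recommends a compliant course of action", 5.0, True),
]
print(f"Task score: {rubric_score(rubric):.1f}")  # -> 80.0
```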

Abstract

As Large Language Models (LLMs) exhibit plateauing performance on conventional benchmarks, a pivotal challenge persists: evaluating their proficiency in the complex, open-ended tasks that characterize genuine expert-level cognition. Existing frameworks suffer from narrow domain coverage, reliance on generalist tasks, or self-evaluation biases. To bridge this gap, we present XpertBench, a high-fidelity benchmark engineered to assess LLMs across authentic professional domains. XpertBench consists of 1,346 meticulously curated tasks across 80 categories, spanning finance, healthcare, legal services, education, and dual-track research (STEM and Humanities). These tasks are derived from over 1,000 submissions by domain experts (including researchers from elite institutions and practitioners with extensive clinical or industrial experience), ensuring superior ecological validity. Each task uses a detailed rubric with typically 15-40 weighted checkpoints to assess professional rigor. To facilitate scalable yet human-aligned assessment, we introduce ShotJudge, a novel evaluation paradigm that employs LLM judges calibrated with expert few-shot exemplars to mitigate self-rewarding biases. Our empirical evaluation of state-of-the-art LLMs reveals a pronounced performance ceiling: even leading models achieve a peak success rate of only ~66%, with a mean score around 55%. Models also exhibit domain-specific divergence, showing non-overlapping strengths in quantitative reasoning versus linguistic synthesis. These findings underscore a significant "expert-gap" in current AI systems and establish XpertBench as a critical instrument for navigating the transition from general-purpose assistants to specialized professional collaborators.
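
The abstract describes ShotJudge only at a high level: LLM judges calibrated with expert few-shot exemplars. One plausible reading is that each checkpoint verdict comes from a prompt that prefixes expert-graded examples to the case being judged. The sketch below assumes exactly that; the function name, exemplar fields, and verdict format are hypothetical, not taken from the paper.

```python
def build_judge_prompt(task: str, response: str, checkpoint: str,
                       exemplars: list[dict]) -> str:
    """Assemble a checkpoint-grading prompt for an LLM judge, prefixed with
    expert-graded exemplars (a guess at how ShotJudge-style calibration works)."""
    shots = "\n\n".join(
        f"Task: {e['task']}\nResponse: {e['response']}\n"
        f"Checkpoint: {e['checkpoint']}\nExpert verdict: {e['verdict']}"
        for e in exemplars
    )
    return (
        "Grade whether the response satisfies the rubric checkpoint. "
        "Follow the expert-graded examples, then answer SATISFIED or NOT SATISFIED.\n\n"
        f"{shots}\n\n"
        f"Task: {task}\nResponse: {response}\nCheckpoint: {checkpoint}\n"
        "Expert verdict:"
    )
```

Anchoring the judge on expert-written verdicts, rather than letting a model grade responses freeform, is how the abstract frames the mitigation of self-rewarding bias.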