Qworld: Question-Specific Evaluation Criteria for LLMs

arXiv cs.CL / 3/26/2026


Key Points

  • The paper argues that evaluating LLM answers to open-ended questions requires context-dependent criteria, since simple binary scoring or static rubrics cannot capture question-specific requirements.
  • It introduces Qworld (One-Question-One-World), which generates question-specific evaluation criteria via a recursive expansion tree that decomposes questions into scenarios, perspectives, and fine-grained binary criteria.
  • On HealthBench, Qworld is reported to cover 89% of expert-authored criteria; 79% of the criteria it generates are novel and validated by human experts, and experts rate them higher in insight and granularity than criteria from prior methods.
  • Applying Qworld to 11 frontier LLMs across HealthBench and Humanity’s Last Exam shows that coarse rubrics miss capability differences, such as long-term impact, equity, error handling, and interdisciplinary reasoning.
  • The core contribution is framing criteria generation as structured coverage of evaluation axes implied by each question, enabling adaptive evaluation rather than fixed task-level rubrics.

Abstract

Evaluating large language models (LLMs) on open-ended questions is difficult because response quality depends on the question's context. Binary scores and static rubrics fail to capture these context-dependent requirements. Existing methods define criteria at the dataset level or generate them in a single pass, which limits their ability to explore the evaluation space implied by each question. We introduce One-Question-One-World (Qworld), a method that generates question-specific evaluation criteria using a recursive expansion tree. Given a question, Qworld decomposes it into scenarios, perspectives, and fine-grained binary criteria through structured hierarchical and horizontal expansion. The resulting criteria specify what a high-quality answer must address for that question. On HealthBench, Qworld covers 89% of expert-authored criteria and generates 79% novel criteria validated by human experts. Experts rate Qworld criteria higher in insight and granularity than those produced by prior methods. When applied to 11 frontier LLMs on HealthBench and Humanity's Last Exam, Qworld reveals capability differences in dimensions such as long-term impact, equity, error handling, and interdisciplinary reasoning that coarse rubrics do not distinguish. By formulating criteria generation as structured coverage of question-implied evaluation axes, Qworld enables evaluation that adapts to each question rather than relying on fixed task-level criteria.
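The recursive expansion the abstract describes can be pictured as a small tree-building routine. The sketch below is a minimal illustration, not the paper's algorithm: the node kinds follow the question → scenario → perspective → criterion hierarchy from the abstract, but the `propose_children` stub, the fixed branching factor, and all names are assumptions standing in for actual LLM prompts.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str                      # "question" | "scenario" | "perspective" | "criterion"
    text: str
    children: list = field(default_factory=list)

# Order of hierarchical expansion, per the abstract's description.
NEXT_KIND = {"question": "scenario", "scenario": "perspective",
             "perspective": "criterion"}

def propose_children(node: Node, next_kind: str, n: int = 2) -> list:
    # Stand-in for an LLM prompt that proposes n sibling expansions
    # ("horizontal expansion"); the templated text is purely illustrative.
    return [Node(next_kind, f"{next_kind} {i + 1} for: {node.text}")
            for i in range(n)]

def expand(node: Node) -> Node:
    """Hierarchical expansion down to fine-grained binary criteria (leaves)."""
    if node.kind == "criterion":
        return node
    next_kind = NEXT_KIND[node.kind]
    for child in propose_children(node, next_kind):
        node.children.append(expand(child))
    return node

def collect_criteria(node: Node) -> list:
    """Gather the leaf criteria: what a high-quality answer must address."""
    if node.kind == "criterion":
        return [node.text]
    out = []
    for child in node.children:
        out.extend(collect_criteria(child))
    return out

tree = expand(Node("question", "How should a patient manage a mild fever at home?"))
criteria = collect_criteria(tree)
# With branching factor 2 at each level: 2 scenarios x 2 perspectives x 2 = 8 criteria
```

In a real system each `propose_children` call would be a model prompt conditioned on the path from the root, and the leaves would be yes/no checks applied to a candidate answer.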