Who Defines "Best"? Towards Interactive, User-Defined Evaluation of LLM Leaderboards

arXiv cs.AI / 4/25/2026

💬 Opinion · Ideas & Deep Analysis · Tools & Practical Usage · Models & Research

Key Points

  • LLM leaderboards are commonly used to compare models, but their rankings largely reflect benchmark designers’ priorities rather than the real goals and constraints of users and organizations.
  • The paper’s analysis of the LMArena (formerly Chatbot Arena) dataset finds heavy topical skew, model rankings that shift across prompt “slices,” and preference-based judgments applied in ways that blur their intended scope.
  • The authors propose an interactive visualization that lets users select and weight prompt slices to define their own evaluation priorities and see how model rankings shift under those choices (see the sketch after this list).
  • A qualitative study indicates the interactive approach increases transparency and enables more context-specific evaluation of LLMs, suggesting better ways to design and use leaderboards.
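
To make the slice-weighting idea concrete, here is a minimal Python sketch, not the paper’s implementation: the model names, slice labels, and per-slice scores below are hypothetical placeholders standing in for per-topic win rates.

```python
# Minimal sketch of user-defined leaderboard weighting (illustrative only).
# SLICE_SCORES holds hypothetical per-slice scores (e.g., per-topic win rates).

SLICE_SCORES = {
    "model_a": {"coding": 0.62, "creative_writing": 0.48, "math": 0.55},
    "model_b": {"coding": 0.51, "creative_writing": 0.60, "math": 0.58},
    "model_c": {"coding": 0.58, "creative_writing": 0.52, "math": 0.49},
}

def rank_models(slice_scores, weights):
    """Rank models by a weighted average of their per-slice scores."""
    total = sum(weights.values())
    norm = {s: w / total for s, w in weights.items()}
    agg = {
        model: sum(norm[s] * scores[s] for s in norm)
        for model, scores in slice_scores.items()
    }
    return sorted(agg.items(), key=lambda kv: kv[1], reverse=True)

# Equal weights vs. a coding-heavy weighting produce different orderings.
print(rank_models(SLICE_SCORES, {"coding": 1, "creative_writing": 1, "math": 1}))
print(rank_models(SLICE_SCORES, {"coding": 5, "creative_writing": 1, "math": 1}))
```

With these placeholder numbers, the coding-heavy weighting flips the ranking relative to equal weights; surfacing exactly this kind of shift interactively is what the proposed interface is designed to do.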

Abstract

LLM leaderboards are widely used to compare models and guide deployment decisions. However, leaderboard rankings are shaped by evaluation priorities set by benchmark designers, rather than by the diverse goals and constraints of actual users and organizations. A single aggregate score often obscures how models behave across different prompt types and compositions. In this work, we conduct an in-depth analysis of the dataset used in the LMArena (formerly Chatbot Arena) benchmark and investigate this evaluation challenge by designing an interactive visualization interface as a design probe. Our analysis reveals that the dataset is heavily skewed toward certain topics, that model rankings vary across prompt slices, and that preference-based judgments are used in ways that blur their intended scope. Building on this analysis, we introduce a visualization interface that allows users to define their own evaluation priorities by selecting and weighting prompt slices and to explore how rankings change accordingly. A qualitative study suggests that this interactive approach improves transparency and supports more context-specific model evaluation, pointing toward alternative ways to design and use LLM leaderboards.
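
As a rough illustration of why a single aggregate score can mislead (a sketch over assumed data, not the LMArena pipeline), the snippet below computes overall and per-slice win rates from a handful of hypothetical pairwise votes. Because the sample skews toward writing prompts, the overall leader differs from the leader on the coding slice.

```python
# Illustrative sketch: per-slice win rates from hypothetical pairwise preference votes.
from collections import defaultdict

# Hypothetical battle records: (slice, winner, loser). The skew toward "writing"
# mimics the topical imbalance described in the analysis.
battles = [
    ("coding", "model_a", "model_b"),
    ("coding", "model_a", "model_b"),
    ("coding", "model_b", "model_a"),
    ("writing", "model_b", "model_a"),
    ("writing", "model_b", "model_a"),
    ("writing", "model_a", "model_b"),
    ("writing", "model_b", "model_a"),
]

def win_rates(records):
    """Fraction of battles won by each model in the given records."""
    wins, games = defaultdict(int), defaultdict(int)
    for _, winner, loser in records:
        wins[winner] += 1
        games[winner] += 1
        games[loser] += 1
    return {m: wins[m] / games[m] for m in games}

overall = win_rates(battles)
per_slice = {
    s: win_rates([b for b in battles if b[0] == s])
    for s in {b[0] for b in battles}
}
print("overall:", overall)      # model_b leads overall because writing prompts dominate
print("per slice:", per_slice)  # model_a leads on coding, model_b on writing
```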