Personalized Benchmarking: Evaluating LLMs by Individual Preferences

arXiv cs.AI / April 22, 2026

📰 News · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper argues that existing LLM benchmarks are flawed because they average preferences across users, masking how different individuals rank models in different contexts.
  • It proposes personalized LLM benchmarking by computing user-specific model rankings using ELO ratings and Bradley–Terry coefficients based on 115 active Chatbot Arena users.
  • The authors find that individual user rankings can diverge sharply from aggregate rankings: Bradley–Terry correlations average near zero (ρ = 0.04), while ELO correlations are only moderate (ρ = 0.43).
  • Analysis shows substantial heterogeneity in users’ topic interests and writing/communication styles, which materially influence their preferred LLMs.
  • The study also demonstrates that a compact feature space combining topic and style can help predict user-specific model rankings, supporting the case for personalized benchmarks.
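To make the ranking machinery concrete, here is a minimal sketch of how per-user Bradley–Terry strengths could be estimated from pairwise model comparisons, using the standard MM (minorization–maximization) iteration. This is an illustrative reconstruction, not the paper's actual code; the model names and win counts are made-up toy data.

```python
def bradley_terry(wins, models, iters=200):
    """Estimate Bradley-Terry strengths with the MM algorithm.

    wins[(a, b)] is the number of times model a beat model b
    in one user's pairwise comparisons (toy data, not the paper's).
    Returns a dict mapping each model to a positive strength;
    sorting by strength gives that user's personalized ranking.
    """
    p = {m: 1.0 for m in models}
    for _ in range(iters):
        new = {}
        for i in models:
            # Total wins of i, and the MM denominator over all opponents.
            num = sum(wins.get((i, j), 0) for j in models if j != i)
            den = 0.0
            for j in models:
                if j == i:
                    continue
                n_ij = wins.get((i, j), 0) + wins.get((j, i), 0)
                if n_ij:
                    den += n_ij / (p[i] + p[j])
            new[i] = num / den if den else p[i]
        # Normalize so strengths stay on a comparable scale.
        s = sum(new.values())
        p = {m: v * len(models) / s for m, v in new.items()}
    return p


def spearman(rank_a, rank_b):
    """Spearman rank correlation between two rankings (no ties assumed).

    rank_a and rank_b map each model to its integer rank; this is the
    kind of statistic behind the rho values reported in the paper.
    """
    n = len(rank_a)
    d2 = sum((rank_a[m] - rank_b[m]) ** 2 for m in rank_a)
    return 1 - 6 * d2 / (n * (n ** 2 - 1))
```

Comparing one user's Bradley–Terry ranking to the aggregate ranking with `spearman` would yield a per-user ρ; a value near zero, as the paper reports for most users, means the aggregate leaderboard tells you little about that user's actual preferences.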

Abstract

With the rise in capabilities of large language models (LLMs) and their deployment in real-world tasks, evaluating LLM alignment with human preferences has become an important challenge. Current benchmarks average preferences across all users to compute aggregate ratings, overlooking individual user preferences when establishing model rankings. Since users have varying preferences in different contexts, we call for personalized LLM benchmarks that rank models according to individual needs. We compute personalized model rankings using ELO ratings and Bradley-Terry coefficients for 115 active Chatbot Arena users and analyze how user query characteristics (topics and writing style) relate to LLM ranking variations. We demonstrate that individual rankings of LLM models diverge dramatically from aggregate LLM rankings, with Bradley-Terry correlations averaging only ρ = 0.04 (57% of users show near-zero or negative correlation) and ELO ratings showing moderate correlation (ρ = 0.43). Through topic modeling and style analysis, we find users exhibit substantial heterogeneity in topical interests and communication styles, influencing their model preferences. We further show that a compact combination of topic and style features provides a useful feature space for predicting user-specific model rankings. Our results provide strong quantitative evidence that aggregate benchmarks fail to capture individual preferences for most users, and highlight the importance of developing personalized benchmarks that rank LLM models according to individual user preferences.
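The ELO side of the methodology is the familiar sequential rating update from chess, applied to head-to-head model comparisons. The following sketch shows the standard formula; the K-factor of 32 is a conventional default, not a value taken from the paper.

```python
def elo_update(r_a, r_b, score_a, k=32):
    """One Elo rating update after a single head-to-head comparison.

    r_a, r_b : current ratings of models A and B.
    score_a  : 1.0 if A wins, 0.0 if A loses, 0.5 for a tie.
    k        : update step size (32 is a common default; assumption,
               not the paper's setting).
    """
    # Logistic expected score for A given the rating gap.
    expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))
    r_a_new = r_a + k * (score_a - expected_a)
    r_b_new = r_b + k * ((1 - score_a) - (1 - expected_a))
    return r_a_new, r_b_new


# Two equally rated models; A wins the comparison.
print(elo_update(1000.0, 1000.0, 1.0))  # → (1016.0, 984.0)
```

Running this update over only one user's comparisons, instead of the whole Arena population, yields the user-specific ELO rankings whose correlation with the aggregate leaderboard the abstract reports as ρ = 0.43.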