Evaluation of Large Language Models via Coupled Token Generation

arXiv cs.CL / 3/26/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper argues that evaluating and ranking large language models should control for inherent generation randomness, since repeated runs on the same prompt can yield different outputs.
  • It proposes a causal model for “coupled autoregressive generation” so multiple LLMs can be sampled using the same underlying source of randomness.
  • On benchmark-dataset evaluations, coupled generation produces the same ranking conclusions as standard (vanilla) sampling while requiring provably fewer samples.
  • However, for human pairwise-comparison evaluations, the paper finds coupled vs vanilla sampling can yield different model rankings when comparing more than two models, even with infinite samples, implying current evaluation advantages may be confounded by randomness.
  • Experiments with models from the Llama, Mistral, and Qwen families show that up to 75% fewer samples suffice to reach the same benchmark conclusions, and that win rates on prompts from the LMSYS Chatbot Arena platform differ between the two sampling methods.
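The core coupling idea can be sketched in a few lines. The toy below is a minimal illustration, not the paper's implementation: it assumes the coupling works by inverse-transform sampling, where every model consumes the same uniform draw at each generation step; the function names and the fixed toy distributions are invented for illustration.

```python
import numpy as np

def inverse_cdf_sample(probs, u):
    """Inverse-transform sampling: return the first token index whose
    cumulative probability exceeds the uniform draw u."""
    return int(np.searchsorted(np.cumsum(probs), u, side="right"))

def coupled_generate(models, prompt, n_steps, rng):
    """Toy coupled autoregressive generation: at each step, all models
    sample their next token from the SAME uniform draw u.
    `models` maps a model name to a next-token-distribution function."""
    sequences = {name: list(prompt) for name in models}
    for _ in range(n_steps):
        u = rng.random()  # one shared source of randomness per step
        for name, next_token_probs in models.items():
            probs = next_token_probs(sequences[name])
            sequences[name].append(inverse_cdf_sample(probs, u))
    return sequences

# Two models with identical next-token distributions produce identical
# outputs under coupling, since they consume the same randomness.
models = {
    "A": lambda seq: np.array([0.5, 0.3, 0.2]),
    "B": lambda seq: np.array([0.5, 0.3, 0.2]),
}
rng = np.random.default_rng(0)
out = coupled_generate(models, [], 5, rng)
print(out["A"] == out["B"])  # True
```

Under vanilla generation, each model would draw its own `u`, so even identical models can disagree on a given prompt; coupling removes that spurious disagreement.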

Abstract

State-of-the-art large language models rely on randomization to respond to a prompt. As an immediate consequence, a model may respond differently to the same prompt if asked multiple times. In this work, we argue that the evaluation and ranking of large language models should control for the randomization underpinning their functioning. Our starting point is the development of a causal model for coupled autoregressive generation, which allows different large language models to sample responses with the same source of randomness. Building upon our causal model, we first show that, on evaluations based on benchmark datasets, coupled autoregressive generation leads to the same conclusions as vanilla autoregressive generation but using provably fewer samples. However, we further show that, on evaluations based on (human) pairwise comparisons, coupled and vanilla autoregressive generation can surprisingly lead to different rankings when comparing more than two models, even with an infinite amount of samples. This suggests that the apparent advantage of a model over others in existing evaluation protocols may not be genuine but rather confounded by the randomness inherent to the generation process. To illustrate and complement our theoretical results, we conduct experiments with several large language models from the Llama, Mistral and Qwen families. We find that, across multiple benchmark datasets, coupled autoregressive generation requires up to 75% fewer samples to reach the same conclusions as vanilla autoregressive generation. Further, we find that the win-rates derived from pairwise comparisons by a strong large language model to prompts from the LMSYS Chatbot Arena platform differ under coupled and vanilla autoregressive generation.
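The sample-efficiency claim has a classic statistical intuition: coupling is a form of common random numbers, a variance-reduction technique for comparing two stochastic systems. The toy simulation below is an illustration of that intuition under simplifying assumptions (each model's success on a prompt is a Bernoulli event driven by a uniform draw), not the paper's actual analysis; all numbers are invented.

```python
import numpy as np

def estimate_gap(p_a, p_b, n, rng, coupled):
    """Estimate the accuracy gap between two models whose per-prompt
    success probabilities are p_a and p_b, using n simulated prompts.
    If coupled, both models consume the same uniform draws."""
    u_a = rng.random(n)
    u_b = u_a if coupled else rng.random(n)  # shared vs independent randomness
    return np.mean(u_a < p_a) - np.mean(u_b < p_b)

rng = np.random.default_rng(0)
gaps_coupled = [estimate_gap(0.7, 0.6, 200, rng, True) for _ in range(1000)]
gaps_vanilla = [estimate_gap(0.7, 0.6, 200, rng, False) for _ in range(1000)]

# Both estimators are unbiased for the true gap (0.1), but the coupled
# estimator has much lower variance, so fewer samples reach the same
# conclusion about which model is better.
print(np.var(gaps_coupled) < np.var(gaps_vanilla))  # True
```

Lower variance at a fixed sample size translates directly into fewer samples for the same statistical confidence, which mirrors the paper's finding of up to 75% fewer samples on benchmark evaluations.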
