Evaluation of Large Language Models via Coupled Token Generation
arXiv cs.CL / 3/26/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper argues that evaluating and ranking large language models should control for inherent generation randomness, since repeated runs on the same prompt can yield different outputs.
- It proposes a causal model of "coupled autoregressive generation" so that multiple LLMs can be sampled from the same underlying source of randomness at every decoding step (a minimal sketch follows this list).
- On benchmark-dataset evaluations, coupled generation reaches the same ranking conclusions as standard (vanilla) sampling while provably requiring fewer samples (a variance-reduction sketch also follows this list).
- However, for human pairwise-comparison evaluations, the paper finds that coupled and vanilla sampling can yield different model rankings when more than two models are compared, even with infinitely many samples, suggesting that apparent advantages in current evaluations may be confounded by sampling randomness.
- Experiments across the Llama, Mistral, and Qwen model families show that up to 75% fewer samples suffice to reach the same benchmark conclusions, and that win rates on prompts from the LMSYS Chatbot Arena differ under the two sampling methods.
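To make the mechanism concrete, here is a minimal sketch of one way to realize coupled generation: share the per-step noise across models and turn noise into tokens via the Gumbel-max trick. The toy vocabulary, models, and function names are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

VOCAB_SIZE = 5  # toy shared vocabulary; real LLMs would couple over the tokenizer's vocabulary

def next_tokens_coupled(logits_a, logits_b, rng):
    """Draw one token from each model using the same noise realization.

    Gumbel-max trick: argmax_i(logits[i] + g[i]) with i.i.d. Gumbel noise g
    is an exact sample from softmax(logits). Reusing one Gumbel vector for
    both models couples their draws (a common-random-numbers scheme).
    """
    gumbel = rng.gumbel(size=VOCAB_SIZE)  # the shared source of randomness
    return int(np.argmax(logits_a + gumbel)), int(np.argmax(logits_b + gumbel))

def generate_coupled(model_a, model_b, steps, seed=0):
    """model_a / model_b map a token prefix to next-token logits (LLM stand-ins)."""
    rng = np.random.default_rng(seed)
    seq_a, seq_b = [], []
    for _ in range(steps):
        tok_a, tok_b = next_tokens_coupled(model_a(seq_a), model_b(seq_b), rng)
        seq_a.append(tok_a)
        seq_b.append(tok_b)
    return seq_a, seq_b

# Two toy "models": under coupling, identical models always emit identical
# sequences, so any observed difference reflects the models, not the dice.
uniform_model = lambda prefix: np.zeros(VOCAB_SIZE)
peaked_model = lambda prefix: np.array([2.0, 0.0, 0.0, 0.0, 0.0])
print(generate_coupled(uniform_model, peaked_model, steps=8))
```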
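The sample-efficiency claim has a classic common-random-numbers reading: when both models consume the same noise, their per-sample scores are positively correlated, so Var[f(A) - f(B)] = Var[f(A)] + Var[f(B)] - 2·Cov[f(A), f(B)] shrinks. The logits and metric below are assumptions for illustration; the sketch only shows that coupling lowers the variance of the estimated score difference while leaving its mean unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)
logits_a = np.array([1.0, 0.5, 0.0])  # hypothetical next-token logits, model A
logits_b = np.array([0.8, 0.6, 0.1])  # hypothetical next-token logits, model B
score = np.array([1.0, 0.0, 0.0])     # toy metric: 1 if token 0 is emitted

n = 200_000
g_shared = rng.gumbel(size=(n, 3))    # coupled: both models share this noise
diff_coupled = (score[np.argmax(logits_a + g_shared, axis=1)]
                - score[np.argmax(logits_b + g_shared, axis=1)])

g_a = rng.gumbel(size=(n, 3))         # vanilla: independent noise per model
g_b = rng.gumbel(size=(n, 3))
diff_vanilla = (score[np.argmax(logits_a + g_a, axis=1)]
                - score[np.argmax(logits_b + g_b, axis=1)])

# Same mean (both estimators are unbiased for the true score gap), but the
# coupled estimator's variance is smaller, so fewer samples yield the same
# confidence about which model is ahead.
print(f"coupled: mean={diff_coupled.mean():+.4f}  var={diff_coupled.var():.4f}")
print(f"vanilla: mean={diff_vanilla.mean():+.4f}  var={diff_vanilla.var():.4f}")
```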