LLM Olympiad: Why Model Evaluation Needs a Sealed Exam

arXiv cs.AI / 3/25/2026

Key Points

  • The article argues that current LLM benchmarks and leaderboards can be misleading because scores may be driven by benchmark-chasing, undisclosed evaluation choices, or accidental test-set exposure rather than true general capability.
  • It critiques closed benchmarks as a partial fix: they can improve reliability, but they reduce transparency and make it harder for the community to learn from published results.
  • The proposed alternative is an Olympiad-style evaluation format in which problems are sealed until evaluation, submissions are frozen in advance, and every entry runs through a single standardized evaluation harness (a commitment sketch follows this list).
  • After results are produced, the full task set and evaluation code should be released to enable reproducibility, auditing, and clearer interpretation of performance.
  • Overall, the method is intended to make high scores harder to “manufacture” while increasing trust in reported evaluation outcomes.
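The freeze-then-reveal step can be made verifiable with simple cryptographic commitments. The sketch below is illustrative rather than the paper's protocol: it assumes problems and submissions are files, and the `freeze`/`verify` names and JSON manifest format are hypothetical.

```python
import hashlib
import json
from pathlib import Path

def sha256_file(path: Path) -> str:
    """Hash a file's contents so they can be committed to before the event."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def freeze(paths: list[Path], manifest: Path) -> None:
    """Publish a manifest of hashes now; reveal the files themselves later.

    Organizers freeze the sealed problem set this way before the event,
    and participants freeze their submissions before problems are unsealed.
    """
    commitments = {p.name: sha256_file(p) for p in paths}
    manifest.write_text(json.dumps(commitments, indent=2, sort_keys=True))

def verify(manifest: Path, directory: Path) -> bool:
    """After release, re-hash the files and check them against the manifest,
    confirming nothing was swapped between the freeze and scoring."""
    commitments = json.loads(manifest.read_text())
    return all(
        sha256_file(directory / name) == digest
        for name, digest in commitments.items()
    )
```

Because both sides commit before the seal is broken, a high score can no longer be explained by late tweaks to a submission or quiet edits to the tasks: any mismatch between the released files and the public manifests is detectable by anyone.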

Abstract

Benchmarks and leaderboards are how NLP most often communicates progress, but in the LLM era they are increasingly easy to misread. Scores can reflect benchmark-chasing, hidden evaluation choices, or accidental exposure to test content, not just broad capability. Closed benchmarks delay some of these issues, but reduce transparency and make it harder for the community to learn from results. We argue for a complementary practice: an Olympiad-style evaluation event where problems are sealed until evaluation, submissions are frozen in advance, and all entries run through one standardized harness. After scoring, the full task set and evaluation code are released so results can be reproduced and audited. This design aims to make strong performance harder to "manufacture" and easier to trust.
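A single standardized harness is what makes scores comparable: same tasks, same limits, same scoring rule, with a per-task audit log released alongside the tasks and code. The sketch below assumes a hypothetical interface, not one from the paper: each frozen submission is an executable that reads a task prompt on stdin and writes an answer to stdout, scored by exact match.

```python
import json
import subprocess
from pathlib import Path

def run_harness(tasks: list[dict], submissions: list[Path], log: Path) -> dict[str, int]:
    """Run every frozen entry through one identical loop and log each step."""
    results: dict[str, int] = {}
    with log.open("w") as audit:
        for sub in submissions:
            score = 0
            for task in tasks:
                try:
                    # Identical conditions for every entry: same prompt,
                    # same 60-second limit, same exact-match scoring.
                    proc = subprocess.run(
                        [str(sub)], input=task["prompt"], text=True,
                        capture_output=True, timeout=60,
                    )
                    answer = proc.stdout.strip()
                except subprocess.TimeoutExpired:
                    answer = ""  # a timed-out entry scores zero on this task
                correct = answer == task["answer"]
                score += correct
                # The audit log, released with the tasks and this code,
                # lets anyone re-derive the leaderboard from raw outputs.
                audit.write(json.dumps({
                    "submission": sub.name, "task": task["id"],
                    "answer": answer, "correct": correct,
                }) + "\n")
            results[sub.name] = score
    return results
```

Because the harness, tasks, and logs are all released after scoring, the leaderboard can be recomputed end to end rather than taken on faith.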