On Benchmark Hacking in ML Contests: Modeling, Insights and Design

arXiv cs.LG / 4/27/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper defines benchmark hacking as optimizing a model to score well on evaluation metrics while failing to improve genuine generalization or correctly solving the intended task.
  • It models ML contests as a game where contestants split effort between creative work that increases intended capability and mechanistic work that overfits to the contest setting.
  • The authors prove there exists a symmetric monotone pure-strategy equilibrium and use it to formalize benchmark hacking by comparing players’ equilibrium effort allocations to a single-agent baseline.
  • Their results predict a threshold effect: contestants with “low” types always engage in benchmark hacking, while those with “high” types avoid it.
  • The paper also argues that more skewed reward structures (rewarding top ranks more heavily) can produce more desirable contest outcomes, a prediction supported by empirical evidence.

Abstract

Benchmark hacking refers to tuning a machine learning model to score highly on certain evaluation criteria without improving true generalization or faithfully solving the intended problem. We study this phenomenon in a generic machine learning contest, where each contestant chooses two types of effort: creative effort, which improves model capability as desired by the contest host, and mechanistic effort, which only improves the model's fitness to the particular task in the contest without contributing to true generalization. We establish the existence of a symmetric monotone pure-strategy equilibrium in this competition game. This equilibrium also yields a natural definition of benchmark hacking in the strategic context: a player's equilibrium effort allocation is compared to that of a single-agent baseline scenario. Under our definition, contestants with types below a certain threshold (low types) always engage in benchmark hacking, whereas those above the threshold do not. Furthermore, we show that more skewed reward structures (favoring top-ranked contestants) can elicit more desirable contest outcomes. We also provide empirical evidence supporting our theoretical predictions.
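To make the threshold intuition concrete, here is a minimal toy sketch — not the paper's actual model, and with a made-up linear payoff — in which a contestant of type `t` splits one unit of effort between creative work (raising true capability) and mechanistic work (raising benchmark score only, at a per-unit rate `c`). A single-agent baseline rewarded on true capability always puts all effort into creative work, while a score-maximizing contestant goes fully mechanistic whenever `t < c`, reproducing the low-type/high-type threshold:

```python
# Hypothetical illustration of the threshold effect; the payoff functions
# below are stylized stand-ins, not the paper's equilibrium model.
# Benchmark score: t*x + c*(1-x), true capability: t*x,
# where x in [0, 1] is the share of effort spent on creative work.

def best_creative_effort(t: float, c: float, contest: bool = True) -> float:
    """Creative-effort share maximizing the relevant linear objective."""
    if not contest:
        # Single-agent baseline: rewarded on true capability t*x,
        # so all effort goes to creative work.
        return 1.0
    # Contest: rewarded on benchmark score t*x + c*(1-x); the linear
    # objective is maximized at a corner, depending on whether t > c.
    return 1.0 if t > c else 0.0

def hacks_benchmark(t: float, c: float) -> bool:
    """Benchmark hacking = less creative effort than the baseline."""
    return best_creative_effort(t, c) < best_creative_effort(t, c, contest=False)

c = 0.5  # per-unit payoff of mechanistic (score-only) effort; hypothetical
for t in (0.2, 0.4, 0.6, 0.8):
    print(f"type {t}: hacks benchmark = {hacks_benchmark(t, c)}")
```

In this stylized version, types below the threshold `c` hack the benchmark and types above it do not; the paper derives an analogous threshold endogenously from its equilibrium analysis rather than assuming a linear payoff.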