Scaling Self-Play with Self-Guidance

arXiv cs.LG / April 23, 2026


Key Points

  • The paper addresses a scaling limitation of existing LLM self-play methods, which tend to hit learning plateaus over long training runs.
  • It argues the plateau occurs because the Conjecturer can “hack” the reward, collapsing into artificially complex problems that fail to improve the Solver.
  • The authors introduce Self-Guided Self-Play (SGS), adding a Guide role that scores synthetic problems for relevance to unsolved targets and for being clean and natural, counteracting degeneration.
  • The core hypothesis is that language models can judge whether a subproblem meaningfully helps achieve the overall goal.
  • Experiments on Lean4 formal theorem proving show SGS scales: it surpasses the asymptotic solve rate of the strongest RL baseline in fewer than 80 self-play rounds, and after 200 rounds it enables a 7B model to solve more problems than a 671B model at pass@4.

Abstract

LLM self-play algorithms are notable in that, in principle, nothing bounds their learning: a Conjecturer model creates problems for a Solver, and both improve together. However, in practice, existing LLM self-play methods do not scale well with large amounts of compute, instead hitting learning plateaus. We argue this is because over long training runs, the Conjecturer learns to hack its reward, collapsing to artificially complex problems that do not help the Solver improve. To overcome this, we introduce Self-Guided Self-Play (SGS), a self-play algorithm in which the language model itself guides the Conjecturer away from degeneracy. In SGS, the model takes on three roles: Solver, Conjecturer, and a Guide that scores synthetic problems by their relevance to unsolved target problems and by how clean and natural they are, providing supervision against Conjecturer collapse. Our core hypothesis is that language models can assess whether a subproblem is useful for achieving a goal. We evaluate the scaling properties of SGS by running training for significantly longer than prior work and by fitting scaling laws to cumulative solve rate curves. Applying SGS to formal theorem proving in Lean4, we find that it surpasses the asymptotic solve rate of our strongest RL baseline in fewer than 80 rounds of self-play and enables a 7B parameter model, after 200 rounds of self-play, to solve more problems than a 671B parameter model at pass@4.