Revisiting Tree Search for LLMs: Gumbel and Sequential Halving for Budget-Scalable Reasoning

arXiv cs.AI / 2026/3/24

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper reports that AlphaZero-style neural tree search for LLM inference can fail to scale: accuracy drops on GSM8K and Game24 as the search budget increases.
  • It introduces ReSCALE, an adaptation of Gumbel AlphaZero MCTS that swaps Dirichlet noise and PUCT selection for Gumbel sampling and adds Sequential Halving to improve budget efficiency (illustrative sketches of both ideas follow below).
  • The authors claim ReSCALE restores monotonic scaling behavior without changing the underlying LLM model or its training process.
  • Reported results include 58.4% on GSM8K and 85.3% on Game24, achieved at budgets where the baseline approach degrades.
  • Ablation experiments indicate that Sequential Halving is the main contributor to the performance gains.
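For intuition on the first change, here is a minimal sketch of the Gumbel-Top-k trick that this style of root action selection relies on. The function name, interface, and example values are my own illustration and assume the LLM policy exposes per-action logits; this is not the paper's code.

```python
import numpy as np

def gumbel_top_k_actions(prior_logits, k, rng=None):
    """Sample k distinct root actions from the policy prior using the
    Gumbel-Top-k trick: perturb each logit with independent Gumbel(0, 1)
    noise and keep the k largest perturbed scores. This draws actions
    without replacement in proportion to the prior, taking over the
    exploration role that Dirichlet noise + PUCT played at the root."""
    rng = rng or np.random.default_rng()
    scores = np.asarray(prior_logits, dtype=float) + rng.gumbel(size=len(prior_logits))
    return np.argsort(scores)[::-1][:k]  # indices of the k best perturbed logits

# Example: pick 4 candidate reasoning steps from 8 hypothetical prior probabilities.
candidates = gumbel_top_k_actions(
    np.log([0.3, 0.2, 0.15, 0.1, 0.1, 0.05, 0.05, 0.05]), k=4
)
```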

Abstract

Neural tree search is a powerful decision-making algorithm widely used in complex domains such as game playing and model-based reinforcement learning. Recent work has applied AlphaZero-style tree search to enhance the reasoning capabilities of Large Language Models (LLMs) during inference, but we find that this approach suffers from a scaling failure: on GSM8K and Game24, accuracy drops as the search budget increases. In this paper, we present ReSCALE, an adaptation of Gumbel AlphaZero MCTS that replaces Dirichlet noise and PUCT selection with Gumbel sampling and Sequential Halving, restoring monotonic scaling without changes to the model or its training. ReSCALE reaches 58.4% on GSM8K and 85.3% on Game24 at budgets where the baseline degrades. Ablations confirm that Sequential Halving is the primary driver of the improvement.
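To make the budget-allocation idea concrete, below is a small, self-contained sketch of Sequential Halving over a set of candidate root actions. It is an assumption-laden illustration: the `simulate` callback (returning a scalar value estimate for one rollout of an action) and all names are hypothetical, not taken from the paper.

```python
import math

def sequential_halving(actions, budget, simulate):
    """Spend a fixed simulation budget in ~log2(|actions|) rounds,
    halving the surviving candidate set after each round and keeping
    the half with the higher empirical mean value."""
    survivors = list(actions)
    rounds = max(1, math.ceil(math.log2(len(survivors))))
    totals = {a: 0.0 for a in survivors}
    counts = {a: 0 for a in survivors}
    for _ in range(rounds):
        # Split this round's share of the budget evenly among survivors,
        # so later (smaller) rounds spend more simulations per candidate.
        per_action = max(1, budget // (rounds * len(survivors)))
        for a in survivors:
            for _ in range(per_action):
                totals[a] += simulate(a)  # one rollout / value estimate
                counts[a] += 1
        survivors.sort(key=lambda a: totals[a] / counts[a], reverse=True)
        survivors = survivors[: max(1, len(survivors) // 2)]
    return survivors[0]  # the single surviving candidate
```

The design property this sketch tries to show is that every extra unit of budget goes into sharper comparisons among fewer surviving candidates, which matches the paper's framing of why scaling the budget should help rather than hurt.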