Revisiting Tree Search for LLMs: Gumbel and Sequential Halving for Budget-Scalable Reasoning
arXiv cs.AI / 3/24/2026
💬 OpinionIdeas & Deep AnalysisModels & Research
Key Points
- The paper reports that AlphaZero-style neural tree search for LLM inference can fail to scale: accuracy drops on GSM8K and Game24 as the search budget increases.
- It introduces ReSCALE, an adaptation of Gumbel AlphaZero MCTS that swaps Dirichlet noise and PUCT selection for Gumbel sampling and adds Sequential Halving to improve budget efficiency.
- The authors claim ReSCALE restores monotonic scaling behavior — more search budget yields equal or better accuracy — without changing the underlying LLM or its training process.
- Reported results include 58.4% on GSM8K and 85.3% on Game24, achieved at budgets where the baseline approach degrades.
- Ablation experiments indicate that Sequential Halving is the main contributor to the performance gains.
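Since the ablations single out Sequential Halving as the main contributor, it is worth seeing what the algorithm actually does: it splits a fixed simulation budget across roughly log2(m) rounds, evaluating all surviving candidates each round and discarding the worse half, so promising actions receive exponentially more simulations. The sketch below is a minimal, generic version; `score_fn` stands in for one stochastic evaluation (e.g. a single rollout), and all names are illustrative rather than ReSCALE's actual API.

```python
import math


def sequential_halving(actions, score_fn, budget):
    """Pick the best action under a fixed evaluation budget.

    Each round spends an equal slice of the budget, spread over the
    surviving candidates, then keeps only the top half by mean score.
    `score_fn(action)` is a hypothetical stochastic evaluator.
    """
    candidates = list(actions)
    totals = {a: 0.0 for a in candidates}
    counts = {a: 0 for a in candidates}
    rounds = max(1, math.ceil(math.log2(len(candidates))))
    per_round = budget // rounds

    while len(candidates) > 1:
        sims = max(1, per_round // len(candidates))
        for a in candidates:
            for _ in range(sims):
                totals[a] += score_fn(a)
                counts[a] += 1
        # Keep the better-scoring half (at least one candidate survives).
        candidates.sort(key=lambda a: totals[a] / counts[a], reverse=True)
        candidates = candidates[: max(1, len(candidates) // 2)]
    return candidates[0]
```

Because the candidate set halves each round while the per-round budget stays fixed, the eventual winner is evaluated far more often than early dropouts, which is exactly the budget-efficiency property the paper leans on.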
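The other ingredient, replacing Dirichlet root noise with Gumbel sampling, rests on the Gumbel-top-k trick: perturbing each logit with independent Gumbel(0, 1) noise and taking the k largest is equivalent to sampling k actions without replacement from the softmax distribution. The following is a minimal sketch of that trick in isolation, not the paper's implementation; `logits` and `rng` are placeholder names.

```python
import math
import random


def gumbel_top_k(logits, k, rng=random):
    """Sample k distinct indices without replacement, proportional to
    softmax(logits), via the Gumbel-top-k trick.

    -log(-log(U)) with U ~ Uniform(0, 1) is a Gumbel(0, 1) sample.
    """
    perturbed = [(logit - math.log(-math.log(rng.random())), idx)
                 for idx, logit in enumerate(logits)]
    perturbed.sort(reverse=True)
    return [idx for _, idx in perturbed[:k]]
```

Unlike Dirichlet noise, which perturbs the visit-count prior, this directly yields a small set of root actions to explore, which composes naturally with Sequential Halving over that set.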