Mitigating Premature Exploitation in Particle-based Monte Carlo for Inference-Time Scaling

arXiv stat.ML · March 31, 2026


Key Points

  • The paper studies why Particle Filtering (PF), used for Inference-Time Scaling (ITS), can fail through premature exploitation when process reward models assign overconfident scores early, leading to particle impoverishment and suboptimal convergence under tight compute budgets.
  • It identifies two root causes: loss of particle-set diversity due to overconfident resampling, and the resulting inability to evaluate the future potential of reasoning paths.
  • The proposed Entropic Particle Filtering (ePF) addresses this with Entropic Annealing (EA), which monitors search diversity via the entropy of the resampling distribution and, when diversity drops, dynamically anneals that distribution to preserve exploration (a minimal sketch follows this list).
  • ePF further improves decision quality using Look-ahead Modulation (LaM), which adds a predictive guide to estimate a state’s potential from its successors.
  • Experiments on difficult math benchmarks show ePF delivers strong gains, including up to ~50% relative improvement in task reward over competitive baselines.
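To make EA concrete, here is a minimal Python sketch of the entropy-monitoring idea promised above. The function name, the threshold h_min, the exponent gamma, and the power-tempering rule are all illustrative assumptions; the paper's exact annealing schedule may differ.

```python
import numpy as np

def entropic_annealing_weights(log_weights, h_min=0.5, gamma=0.5):
    """Temper particle weights when normalized entropy drops below h_min.

    Illustrative sketch of Entropic Annealing, not the paper's exact rule.
    """
    # Stable softmax: convert log-weights to a normalized distribution.
    w = np.exp(log_weights - np.max(log_weights))
    w /= w.sum()
    # Normalized entropy in [0, 1]: 1 = uniform (max diversity), 0 = collapsed.
    entropy = -np.sum(w * np.log(w + 1e-12)) / np.log(len(w))
    if entropy < h_min:
        # Anneal: raise weights to a power gamma < 1, flattening the
        # resampling distribution toward uniform to preserve exploration.
        w = w ** gamma
        w /= w.sum()
    return w

# Example: an overconfident weight set gets flattened before resampling.
w = entropic_annealing_weights(np.log([0.95, 0.02, 0.02, 0.01]))
idx = np.random.choice(len(w), size=len(w), p=w)  # resample particle indices
```

Tempering with gamma < 1 pushes a collapsed weight set toward uniform, so resampling keeps more distinct particles alive when the guiding reward is overconfident.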

Abstract

Inference-Time Scaling (ITS) improves language models by allocating more computation at generation time. Particle Filtering (PF) has emerged as a strong ITS method for complex mathematical reasoning tasks, but it is vulnerable when guided by process reward models, which often assign overconfident scores early in the reasoning process. This causes PF to suffer from premature exploitation: it myopically commits to locally promising trajectories, prunes potentially correct hypotheses, and converges to suboptimal solutions. This failure mode, known as particle impoverishment, is especially severe under constrained computational budgets. To address this, we analyze the problem and identify two root causes: a lack of diversity in the particle set due to overconfident resampling and a consequent inability to assess the potential of a reasoning path. We introduce Entropic Particle Filtering (ePF), an algorithm that integrates two new techniques to solve these issues. The first technique, Entropic Annealing (EA), directly mitigates particle impoverishment by monitoring search diversity via entropy; when diversity drops, it intervenes by dynamically annealing the resampling distribution to preserve exploration. The second, an enhancement called Look-ahead Modulation (LaM), adds a predictive guide to evaluate a state's potential based on its successors. On several challenging math benchmarks, ePF significantly outperforms strong baselines and achieves up to a 50% relative improvement in task reward. Together, these methods improve PF's resilience by balancing the exploration of diverse solution spaces with the exploitation of high-reward regions, ultimately leading to higher-quality solutions.
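For intuition on LaM, a minimal sketch under similar assumptions: a state's immediate process-reward-model (PRM) score is blended with the mean score of a few sampled successors. The blend weight beta, the aggregation by mean, and the function name are illustrative choices, not the paper's stated rule.

```python
def lam_score(prm_score, successor_scores, beta=0.5):
    """Blend a state's immediate PRM score with its successors' mean score.

    Illustrative sketch of Look-ahead Modulation, not the paper's exact rule.
    """
    if not successor_scores:          # no look-ahead available: fall back
        return prm_score
    potential = sum(successor_scores) / len(successor_scores)
    return (1.0 - beta) * prm_score + beta * potential

# Example: a mediocre-looking step whose continuations score well is promoted.
print(lam_score(0.4, [0.8, 0.7, 0.9]))  # 0.6
```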