Power-SMC: Low-Latency Sequence-Level Power Sampling for Training-Free LLM Reasoning

arXiv stat.ML / 2026/3/24

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper frames recent LLM reasoning improvements as “distribution sharpening” using a sequence-level power distribution \(\pi_\alpha(y\mid x)\propto p_\theta(y\mid x)^\alpha\) rather than changing token-level temperature.
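To make the distinction concrete, here is a toy illustration (not from the paper; the two-step model and its probabilities are invented) showing that the sequence-level power distribution \(\pi_\alpha\) is generally different from applying temperature \(1/\alpha\) to each token's conditional, because the two normalize over different objects: whole sequences versus per-step vocabularies.

```python
import numpy as np

# Hypothetical 2-step model over a 2-token vocabulary:
# p(y1) and p(y2 | y1), standing in for an LLM's conditionals.
p1 = np.array([0.7, 0.3])
p2 = np.array([[0.5, 0.5],   # p(y2 | y1=0)
               [0.9, 0.1]])  # p(y2 | y1=1)

alpha = 2.0

# Sequence-level power distribution: pi_alpha(y) ∝ p(y)^alpha,
# normalized jointly over all 4 possible sequences.
seq_p = np.array([p1[a] * p2[a, b] for a in range(2) for b in range(2)])
seq_power = seq_p ** alpha
seq_power /= seq_power.sum()

# Token-level temperature tau = 1/alpha: each conditional is
# sharpened and renormalized separately, step by step.
q1 = p1 ** alpha
q1 /= q1.sum()
q2 = p2 ** alpha
q2 /= q2.sum(axis=1, keepdims=True)
tok_temp = np.array([q1[a] * q2[a, b] for a in range(2) for b in range(2)])

print("sequence-level power:", seq_power)
print("token-level temp:    ", tok_temp)
# The two distributions disagree: low-temperature decoding does not
# target pi_alpha, because per-step normalizers vary with the prefix.
```

The gap between the two vectors is exactly what the per-prefix normalizers contribute, which is why targeting \(\pi_\alpha\) needs sequence-level machinery (MH or SMC) rather than a temperature knob.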

Abstract

Many recent reasoning gains in large language models can be explained as distribution sharpening: biasing generation toward high-likelihood trajectories already supported by the pretrained model, rather than modifying its weights. A natural formalization is the sequence-level power distribution \(\pi_\alpha(y\mid x)\propto p_\theta(y\mid x)^\alpha\) (\(\alpha>1\)), which concentrates mass on whole sequences instead of adjusting token-level temperature. Prior work shows that Metropolis–Hastings (MH) sampling from this distribution recovers strong reasoning performance, but at order-of-magnitude inference slowdowns. We introduce Power-SMC, a training-free Sequential Monte Carlo scheme that targets the same objective while remaining close to standard decoding latency. Power-SMC advances a small particle set in parallel, corrects importance weights token-by-token, and resamples when necessary, all within a single GPU-friendly batched decode. We prove that temperature \(\tau=1/\alpha\) is the unique prefix-only proposal minimizing incremental weight variance, interpret residual instability via prefix-conditioned Rényi entropies, and introduce an exponent-bridging schedule that improves particle stability without altering the target. On MATH500, Power-SMC matches or exceeds MH power sampling while reducing latency from 16–28× to 1.4–3.3× over baseline decoding. The code is available at https://github.com/ArminAzizi98/Power-SMC.
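The abstract's description of the algorithm (parallel particles, token-by-token weight correction at temperature \(1/\alpha\), resampling when needed) can be sketched as follows. This is a minimal toy reconstruction, not the paper's implementation: the stand-in `next_token_probs` model, the particle count, and the ESS-based resampling trigger are all assumptions. Note that with the \(\tau=1/\alpha\) proposal, each incremental weight \(p(y_t\mid y_{<t})^\alpha / q(y_t\mid y_{<t})\) collapses to the per-prefix normalizer, independent of which token the particle drew.

```python
import numpy as np

rng = np.random.default_rng(0)
V = 4  # toy vocabulary size

# Hypothetical stand-in for an LLM's next-token distribution.
W = rng.random((V, V)) + 0.1

def next_token_probs(prev_token):
    p = W[prev_token]
    return p / p.sum()

def power_smc(alpha, n_particles, n_steps):
    """Toy SMC targeting pi_alpha over sequences of length n_steps."""
    prev = np.zeros(n_particles, dtype=int)   # all particles start at token 0
    logw = np.zeros(n_particles)              # log importance weights
    seqs = [[] for _ in range(n_particles)]
    for _ in range(n_steps):
        for i in range(n_particles):
            p = next_token_probs(prev[i])
            q = p ** alpha                    # proposal at temperature 1/alpha
            Z = q.sum()                       # per-prefix normalizer
            q /= Z
            tok = rng.choice(V, p=q)
            # incremental weight p(tok)^alpha / q(tok) equals Z,
            # regardless of which token was drawn
            logw[i] += np.log(Z)
            prev[i] = tok
            seqs[i].append(int(tok))
        # resample when effective sample size falls below half the particles
        w = np.exp(logw - logw.max())
        w /= w.sum()
        ess = 1.0 / np.sum(w ** 2)
        if ess < n_particles / 2:
            idx = rng.choice(n_particles, size=n_particles, p=w)
            prev = prev[idx]
            seqs = [list(seqs[j]) for j in idx]
            logw[:] = 0.0                     # weights uniform after resampling
    return seqs, logw
```

In the real method the inner loop over particles is a single batched decode on the GPU, and the exponent-bridging schedule would anneal `alpha` across steps rather than holding it fixed as this sketch does.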