Tempered Guided Diffusion

arXiv stat.ML / 5/6/2026

Key Points

  • Tempered Guided Diffusion (TGD) is a training-free conditional diffusion sampler that reduces wasted computation caused by widely varying guided trajectories and insufficient recovery from early missteps.
  • TGD formulates sampling as an annealed sequential Monte Carlo (SMC) process that uses noisy diffusion states only as auxiliary variables, reweighting particles via incremental likelihood ratios and resampling across noise levels.
  • The method targets tempered posteriors over the clean signal, concentrating compute on trajectories that are simultaneously plausible under the diffusion prior and the given observation.
  • Under idealized exact-reconstruction assumptions, TGD provides a consistent particle approximation to the posterior as the number of particles increases.
  • For expensive reconstruction settings, Accelerated TGD (A-TGD) prunes particles partway through sampling to keep only a single high-likelihood trajectory, improving wall-clock speed–quality tradeoffs in experiments.
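The annealed SMC mechanics described above — propagate particles across noise levels, reweight by incremental tempered-likelihood ratios, and resample when weights degenerate — can be sketched on a toy 1-D problem. Everything below (the Gaussian prior, the linear noise and tempering schedules, and the closed-form reconstruction `x0_hat`) is an illustrative assumption for this toy, not the paper's actual model or experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical): prior x0 ~ N(0, 1), observation y = x0 + N(0, sigma_y^2).
sigma_y = 0.5
y = 1.0

# Noise schedule: alpha_bar ~ 1 at t=0 (clean) down to ~0 at t=T (pure noise).
T = 50
alpha_bar = np.linspace(0.999, 0.001, T + 1)
# Tempering schedule: lambda = 0 at t=T (prior only) up to 1 at t=0 (full likelihood).
lam = np.linspace(1.0, 0.0, T + 1)

def loglik(x0_hat):
    # Gaussian observation log-likelihood evaluated at the reconstruction.
    return -0.5 * (y - x0_hat) ** 2 / sigma_y ** 2

def systematic_resample(w, rng):
    n = len(w)
    positions = (rng.random() + np.arange(n)) / n
    return np.minimum(np.searchsorted(np.cumsum(w), positions), n - 1)

N = 2000
x = rng.normal(size=N)               # particles at the terminal noise level
logw = np.zeros(N)
x0_hat = np.sqrt(alpha_bar[T]) * x   # E[x0 | x_t] is analytic for this toy prior

for t in range(T, 0, -1):
    # Propagate: sample a clean reconstruction from p(x0 | x_t) (exact here),
    # then re-noise it to the next, lower noise level.
    x0_sample = x0_hat + np.sqrt(1 - alpha_bar[t]) * rng.normal(size=N)
    x = np.sqrt(alpha_bar[t-1]) * x0_sample \
        + np.sqrt(1 - alpha_bar[t-1]) * rng.normal(size=N)
    x0_hat_new = np.sqrt(alpha_bar[t-1]) * x
    # Reweight: incremental ratio between successive tempered targets.
    logw += lam[t-1] * loglik(x0_hat_new) - lam[t] * loglik(x0_hat)
    x0_hat = x0_hat_new
    # Resample when the effective sample size collapses below N/2.
    w = np.exp(logw - logw.max()); w /= w.sum()
    if 1.0 / np.sum(w ** 2) < N / 2:
        idx = systematic_resample(w, rng)
        x, x0_hat = x[idx], x0_hat[idx]
        logw = np.zeros(N)

w = np.exp(logw - logw.max()); w /= w.sum()
posterior_mean_est = float(np.sum(w * x0_hat))
print(posterior_mean_est)  # analytic posterior mean is y / (1 + sigma_y^2) = 0.8
```

Because the toy's reconstruction is exact, the incremental weights telescope to the full likelihood by `t = 0`, so the weighted particles approximate the true posterior — mirroring the consistency claim, which in the paper holds under idealized exact-reconstruction assumptions.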

Abstract

Training-free conditional diffusion provides a flexible alternative to task-specific conditional model training, but existing samplers often allocate computation inefficiently: independent guided trajectories can vary widely in quality, and additional function evaluations along a single trajectory may not recover from poor early decisions. We propose Tempered Guided Diffusion (TGD), an annealed sequential Monte Carlo framework for training-free conditional sampling with diffusion priors. TGD targets tempered posterior distributions over the clean signal, using noisy diffusion states only as auxiliary variables for proposing reconstructions and propagating particles. Particles are reweighted by incremental likelihood ratios, resampled, and propagated across noise levels, concentrating computation on trajectories plausible under both the prior and observation. Under idealized exact-reconstruction assumptions, full TGD yields a consistent particle approximation to the posterior as the number of particles grows. For expensive reconstruction tasks, Accelerated TGD (A-TGD) retains early particle exploration but prunes to a single high-likelihood trajectory partway through sampling. Experiments on a controlled two-dimensional inverse problem and image inverse problems show improved posterior approximation and favorable wall-clock speed-quality tradeoffs over independent multi-trajectory baselines.
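The A-TGD variant's pruning step — keep early multi-particle exploration, then commit to a single high-likelihood trajectory for the remaining (expensive) steps — amounts to a one-time selection on the particle weights. A minimal sketch, with hypothetical names and toy arrays standing in for real particle states:

```python
import numpy as np

def prune_to_best(particles, logw):
    """Keep only the particle with the largest log-weight and reset its weight.

    Illustrative helper (names hypothetical): after this call, the remaining
    sampling steps are spent on a single trajectory instead of the full set.
    """
    best = int(np.argmax(logw))
    return particles[best:best + 1], np.zeros(1)

# Toy particle states and accumulated log-weights at the pruning step.
particles = np.array([0.2, 0.9, -0.4])
logw = np.array([-1.3, -0.1, -2.5])

survivor, survivor_logw = prune_to_best(particles, logw)
print(survivor)  # the single highest-weight trajectory carries on alone
```

The design tradeoff is that pruning forfeits the multi-particle posterior approximation in exchange for wall-clock savings, which is why the paper reserves it for settings where each reconstruction is expensive.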