AI Navigate

Probabilistic Gaussian Homotopy: A Probability-Space Continuation Framework for Nonconvex Optimization

arXiv cs.LG / 3/17/2026


Key Points

  • Probabilistic Gaussian Homotopy (PGH) is introduced as a probability-space continuation framework for nonconvex optimization, using Boltzmann-weighted aggregation of perturbed gradients to bias descent toward low-energy regions.
  • PGH corresponds to a log-sum-exp (soft-min) homotopy that smooths the objective at scale λ>0 and recovers the original objective as λ→0, yielding a posterior-mean generalization of the Moreau envelope.
  • The authors derive a dynamical system that governs minimizer evolution along an annealed homotopy path.
  • The work establishes principled connections between Gaussian continuation, Bayesian denoising, and diffusion-style smoothing.
  • They also propose Probabilistic Gaussian Homotopy Optimization (PGHO), a practical stochastic algorithm based on Monte Carlo gradient estimation, which demonstrates strong performance on high-dimensional nonconvex benchmarks and sparse recovery problems where classical gradient methods and objective-space smoothing frequently fail.
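The log-sum-exp (soft-min) homotopy described above can be illustrated in one dimension. The sketch below is an assumption about the precise form (the paper's definition may differ in normalization): it evaluates F_λ(x) = −λ log ∫ exp(−(f(y) + (x−y)²/(2λ))/λ) dy by grid quadrature, a soft-min relaxation of the Moreau envelope min_y f(y) + (x−y)²/(2λ), and checks that it approaches f(x) as λ→0. The names `softmin_envelope` and the double-well `f` are illustrative, not from the paper.

```python
import numpy as np

def softmin_envelope(f, x, lam, grid):
    """Soft-min (log-sum-exp) relaxation of the Moreau envelope on a 1-D grid:
    F_lam(x) = -lam * log( sum_y exp(-(f(y) + (x - y)^2 / (2*lam)) / lam) * dy ).
    By Laplace's method this tends, as lam -> 0, to the Moreau envelope
    min_y f(y) + (x - y)^2 / (2*lam), and hence to f(x) itself."""
    dy = grid[1] - grid[0]
    e = f(grid) + (x - grid) ** 2 / (2 * lam)
    a = -e / lam
    m = a.max()  # log-sum-exp trick for numerical stability
    return -lam * (m + np.log(np.exp(a - m).sum() * dy))

f = lambda y: (y**2 - 1.0) ** 2          # toy double well, minima at y = +/-1
grid = np.linspace(-3.0, 3.0, 4001)
for lam in (1.0, 0.3, 0.1, 0.01):
    print(lam, softmin_envelope(f, 0.5, lam, grid))
```

At large λ the envelope is heavily smoothed toward the low-energy wells; as λ shrinks, the Gaussian factor concentrates mass near y = x and the value converges to f(x), which is the continuation behavior the key points describe.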

Abstract

We introduce Probabilistic Gaussian Homotopy (PGH), a probability-space continuation framework for nonconvex optimization. Unlike classical Gaussian homotopy, which smooths the objective and uniformly averages gradients, PGH deforms the associated Boltzmann distribution and induces Boltzmann-weighted aggregation of perturbed gradients, which exponentially biases descent directions toward low-energy regions. We show that PGH corresponds to a log-sum-exp (soft-min) homotopy that smooths a nonconvex objective at scale λ>0 and recovers the original objective as λ→0, yielding a posterior-mean generalization of the Moreau envelope, and we derive a dynamical system governing minimizer evolution along an annealed homotopy path. This establishes a principled connection between Gaussian continuation, Bayesian denoising, and diffusion-style smoothing. We further propose Probabilistic Gaussian Homotopy Optimization (PGHO), a practical stochastic algorithm based on Monte Carlo gradient estimation, and demonstrate strong performance on high-dimensional nonconvex benchmarks and sparse recovery problems where classical gradient methods and objective-space smoothing frequently fail.
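To make the ingredients named in the abstract concrete (Gaussian perturbation, Boltzmann-weighted gradient aggregation, Monte Carlo estimation, annealing), here is a minimal NumPy sketch of a PGHO-style step. The function `pgho_step`, the annealing schedule, and the toy objective are hypothetical choices for illustration, not the authors' algorithm.

```python
import numpy as np

def pgho_step(f, grad_f, x, lam, sigma, n_samples=64, lr=0.1, rng=None):
    """One hypothetical PGHO-style step: draw Gaussian perturbations of x,
    weight each perturbed gradient by its Boltzmann factor exp(-f(y)/lam)
    (a soft-min over the samples), and descend along the weighted average."""
    rng = np.random.default_rng() if rng is None else rng
    ys = x + sigma * rng.standard_normal((n_samples, x.size))
    energies = np.array([f(y) for y in ys])
    logw = -energies / lam
    w = np.exp(logw - logw.max())   # log-sum-exp trick for stable weights
    w /= w.sum()
    grads = np.array([grad_f(y) for y in ys])
    g = np.einsum("i,ij->j", w, grads)  # Boltzmann-weighted gradient aggregate
    return x - lr * g

# Toy nonconvex objective with many oscillatory local structures.
f = lambda x: np.sum(x**2) + 0.5 * np.sum(np.cos(5 * x))
grad_f = lambda x: 2 * x - 2.5 * np.sin(5 * x)

# Annealed homotopy path: shrink the smoothing scale lam (and with it the
# perturbation width sigma) toward 0, descending at each scale.
x = np.full(10, 2.0)
for lam in np.geomspace(1.0, 1e-2, 50):
    for _ in range(20):
        x = pgho_step(f, grad_f, x, lam, sigma=np.sqrt(lam))
```

Tying sigma to √λ is one plausible schedule; because the weights concentrate exponentially on low-energy samples, the aggregated direction tilts the descent toward deeper wells, which is the bias over uniform gradient averaging that the abstract emphasizes.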