Convergence Rates for Non-Log-Concave Sampling and Log-Partition Estimation

arXiv stat.ML / 4/24/2026


Key Points

  • The paper studies Gibbs-distribution sampling and log-partition (log-normalizer) estimation when the target is non-log-concave, where prior worst-case guarantees suffer from the curse of dimensionality.
  • It investigates whether the “smoothness helps” phenomenon—where convergence exponents improve with the number of available derivatives—can yield similarly fast rates for non-log-concave sampling.
  • Using information-based complexity, the authors characterize optimal convergence rates for both sampling and log-partition computation and show that they can be equal to, or even faster than, rates for related optimization problems.
  • The study evaluates several polynomial-time sampling algorithms (including an extension of a recent optimization method) and finds that, despite sometimes exhibiting interesting behavior, they do not achieve near-optimal rates.
  • The results deepen theoretical connections among sampling, log-partition estimation, and optimization, especially via the analogy that optimization corresponds to a low-temperature limit of Gibbs sampling (made precise in the formulas below).
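
To make the last two points concrete, here are the standard definitions behind them; the inverse-temperature convention and the illustrative rate exponent below are ours and may differ from the paper's exact notation. For a potential $f$ and inverse temperature $\beta > 0$, the Gibbs distribution and log-partition function are

$$
p_\beta(x) = \frac{e^{-\beta f(x)}}{Z_\beta},
\qquad
Z_\beta = \int e^{-\beta f(x)} \,\mathrm{d}x,
\qquad
\text{log-partition: } \log Z_\beta,
$$

and the link to optimization is the low-temperature limit

$$
-\frac{1}{\beta}\,\log Z_\beta \;\longrightarrow\; \min_x f(x)
\qquad \text{as } \beta \to \infty,
$$

so estimating $\log Z_\beta$ at large $\beta$ subsumes minimizing $f$. The "smoothness helps" phenomenon refers to worst-case rates of the form $n^{-s/d}$ for $s$-times differentiable targets in dimension $d$: the exponent in the rate improves linearly with the number of available derivatives $s$.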

Abstract

Sampling from Gibbs distributions and computing their log-partition function are fundamental tasks in statistics, machine learning, and statistical physics. While efficient algorithms are known for log-concave densities, the worst-case non-log-concave setting necessarily suffers from the curse of dimensionality. For many numerical problems, the curse of dimensionality can be alleviated when the target function is smooth, with the exponent in the convergence rate improving linearly with the number of available derivatives. Recently, it has been shown that efficient optimization algorithms can achieve similarly fast convergence rates. Since optimization can be seen as the low-temperature limit of sampling from Gibbs distributions, we ask whether similarly fast convergence rates can be achieved for non-log-concave sampling. We first study the information-based complexity of the sampling and log-partition estimation problems and show that their optimal rates are sometimes equal to, and sometimes faster than, those for optimization. We then analyze various polynomial-time sampling algorithms, including an extension of a recent promising optimization approach, and find that while they sometimes exhibit interesting behavior, none attains near-optimal rates. Our results also give further insight into the relations among the sampling, log-partition, and optimization problems.
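
As a concrete illustration of the two tasks, below is a minimal Python sketch; it is not the paper's method. It samples a deliberately non-log-concave (bimodal) Gibbs density with the unadjusted Langevin algorithm (ULA), one of the standard polynomial-time samplers, and estimates the log-partition function by plain importance sampling with a Gaussian proposal. The double-well potential `f`, the inverse temperature `beta`, and all step-size and sample-count choices are illustrative assumptions.

```python
import numpy as np

# A deliberately non-log-concave target: the double-well potential
# f(x) = (|x|^2 - 1)^2 / 4, whose Gibbs density p(x) ∝ exp(-beta * f(x))
# is bimodal (mass concentrated near the sphere |x| = 1).
def f(x):
    return 0.25 * (np.sum(x**2) - 1.0) ** 2

def grad_f(x):
    return (np.sum(x**2) - 1.0) * x

def ula_sample(d, beta, step, n_steps, rng):
    """Unadjusted Langevin algorithm for the target exp(-beta * f):
    x <- x - step * beta * grad_f(x) + sqrt(2 * step) * standard normal noise.
    On non-log-concave targets ULA can mix slowly between modes, which is
    one reason such samplers fall short of near-optimal rates."""
    x = rng.standard_normal(d)
    samples = []
    for _ in range(n_steps):
        x = x - step * beta * grad_f(x) + np.sqrt(2.0 * step) * rng.standard_normal(d)
        samples.append(x.copy())
    return np.array(samples)

def log_partition_mc(d, beta, n, rng):
    """Naive importance-sampling estimate of log Z = log ∫ exp(-beta f(x)) dx,
    using proposal q = N(0, I) and weights exp(-beta f(x)) / q(x)."""
    xs = rng.standard_normal((n, d))
    log_q = -0.5 * np.sum(xs**2, axis=1) - 0.5 * d * np.log(2.0 * np.pi)
    log_w = -beta * np.array([f(x) for x in xs]) - log_q
    m = log_w.max()
    return m + np.log(np.mean(np.exp(log_w - m)))  # log-sum-exp for stability

rng = np.random.default_rng(0)
samples = ula_sample(d=2, beta=4.0, step=1e-2, n_steps=5000, rng=rng)
print("ULA mean |x| after burn-in:", np.mean(np.linalg.norm(samples[1000:], axis=1)))
print("log-partition estimate:", log_partition_mc(d=2, beta=4.0, n=100_000, rng=rng))
```

On a double well like this, ULA has to cross the energy barrier between modes by diffusion, and the importance-sampling estimator degrades quickly as the dimension or beta grows; both effects are toy versions of the gap between polynomial-time algorithms and the information-based-complexity rates discussed above.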
