When Langevin Monte Carlo Meets Randomization: Non-asymptotic Error Bounds beyond Log-Concavity and Gradient Lipschitzness

arXiv stat.ML / 4/22/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper revisits randomized Langevin Monte Carlo (RLMC) for sampling from high-dimensional target distributions even when the usual log-concavity assumption does not hold (a generic update sketch appears after this list).
  • Assuming gradient Lipschitzness and a log-Sobolev inequality, it derives a uniform-in-time non-asymptotic error bound in Wasserstein-2 distance of order O(√d · h) for RLMC.
  • This bound matches the best known rate in the literature, which was previously obtained under the log-concavity assumption.
  • For potentials whose gradients are not globally Lipschitz and can grow superlinearly, the authors propose modified RLMC variants.
  • They establish non-asymptotic error bounds for these modified methods, which the authors state are the first such results in the non-globally Lipschitz setting.
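
For intuition, below is a minimal NumPy sketch of one randomized-midpoint Langevin step, a common form of randomized LMC in which the gradient is evaluated at a uniformly random intermediate time. This is an illustration under that assumption, not the paper's exact scheme; the names `rlmc_step` and `grad_U` are placeholders, and the paper's update rule, coupling of the noise, and constants may differ.

```python
import numpy as np

def rlmc_step(x, grad_U, h, rng):
    """One randomized-midpoint Langevin step (illustrative sketch only).

    Target SDE: dX_t = -grad U(X_t) dt + sqrt(2) dB_t.
    x      : current iterate, shape (d,)
    grad_U : callable x -> grad U(x)  (placeholder for the potential's gradient)
    h      : step size
    rng    : numpy.random.Generator
    """
    d = x.shape[0]
    alpha = rng.uniform()  # random evaluation time in (0, 1)
    # Brownian increments of sqrt(2) * B on [0, alpha*h] and [alpha*h, h]
    dW1 = np.sqrt(2.0 * alpha * h) * rng.standard_normal(d)
    dW2 = np.sqrt(2.0 * (1.0 - alpha) * h) * rng.standard_normal(d)
    # Euler predictor at the random intermediate time alpha*h
    x_mid = x - alpha * h * grad_U(x) + dW1
    # Full step: gradient evaluated at the random point, plus the full Brownian increment
    return x - h * grad_U(x_mid) + dW1 + dW2
```

For example, iterating `rlmc_step` with `grad_U = lambda y: y` (a standard Gaussian target, U(x) = ||x||²/2) and a small step size h produces approximate samples from N(0, I); the paper's result says that, under a log-Sobolev inequality and gradient Lipschitzness, the Wasserstein-2 error of such an iteration stays of order √d · h uniformly over the number of steps.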

Abstract

Efficient sampling from complex, high-dimensional target distributions is a fundamental task in diverse disciplines such as scientific computing, statistics, and machine learning. In this paper, we revisit randomized Langevin Monte Carlo (RLMC) for sampling from high-dimensional distributions without log-concavity. Under the gradient Lipschitz condition and the log-Sobolev inequality, we prove a uniform-in-time error bound in the Wasserstein-2 (W₂) distance of order O(√d · h) for the RLMC sampling algorithm, which matches the best one in the literature under the log-concavity condition. Moreover, when the gradient of the potential U is non-globally Lipschitz with superlinear growth, modified RLMC algorithms are proposed and analyzed, with non-asymptotic error bounds established. To the best of our knowledge, the modified RLMC algorithms and their non-asymptotic error bounds are new in the non-globally Lipschitz setting.
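
A common device for potentials whose gradients grow superlinearly (and are therefore not globally Lipschitz) is to "tame" the drift, i.e. replace ∇U by a step-size-dependent bounded surrogate. The sketch below shows one standard taming rule purely as an illustration of what a "modified" scheme can look like; the abstract does not specify the paper's modification, so `tamed_grad` is a hypothetical helper and the exact rule used by the authors may differ.

```python
import numpy as np

def tamed_grad(grad_U, h):
    """Return a tamed drift whose norm is bounded by 1/h, even if grad U grows superlinearly.

    Generic taming rule from the literature on explicit schemes for superlinear drifts;
    shown for illustration only, not as the paper's exact modification.
    """
    def g(x):
        grad = grad_U(x)
        return grad / (1.0 + h * np.linalg.norm(grad))
    return g
```

Plugging `tamed_grad(grad_U, h)` into the step sketched above in place of `grad_U` gives one possible modified RLMC of the kind the abstract alludes to, e.g. for a double-well potential U(x) = ||x||⁴/4 − ||x||²/2, whose gradient grows cubically and violates global Lipschitzness.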