AI Navigate

On the Robustness of Langevin Dynamics to Score Function Error

arXiv cs.LG / 3/13/2026


Key Points

  • The paper analyzes how errors in the estimated score function affect score-based generative modeling, showing Langevin dynamics is not robust to L^2 (or L^p) score errors.
  • It contrasts this with diffusion models, which can sample faithfully from the target distribution under small L^2 score errors within a polynomial time horizon.
  • The authors prove that for simple high-dimensional distributions, Langevin dynamics will produce a distribution far from the target in TV distance, even with arbitrarily small estimation errors, if run for any polynomial time.
  • Practically, this motivates preferring diffusion-model approaches over Langevin dynamics when learning scores from data and cautions against using Langevin with estimated scores.
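
The setting contrasted in the points above — sampling with an estimated rather than exact score — can be illustrated with a minimal sketch of unadjusted Langevin dynamics. The 1-D standard Gaussian target, the step size, and the `langevin_chains` helper are illustrative assumptions, not from the paper (whose counterexamples are high-dimensional):

```python
import numpy as np

def langevin_chains(score, x0, step=1e-2, n_steps=2000, seed=0):
    """Run unadjusted Langevin dynamics on many chains in parallel:
    x_{k+1} = x_k + step * score(x_k) + sqrt(2 * step) * N(0, 1)."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        x = x + step * score(x) + np.sqrt(2 * step) * rng.standard_normal(x.shape)
    return x

# Exact score of a standard Gaussian target: grad log p(x) = -x.
true_score = lambda x: -x

# A hypothetical perturbed score with a small pointwise (hence small L^2) error.
# The paper's point is that small L^2 error alone does NOT guarantee that
# Langevin dynamics lands near the target in high dimensions.
eps = 0.05
noisy_score = lambda x: -x + eps * np.sin(x)

x = langevin_chains(true_score, np.zeros(1000))
```

With the exact score, the empirical mean and standard deviation of `x` end up close to 0 and 1; the paper's counterexamples show that once the score is merely estimated, no analogous guarantee holds for Langevin dynamics within any polynomial time horizon, however small the L^2 error.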

Abstract

We consider the robustness of score-based generative modeling to errors in the estimate of the score function. In particular, we show that Langevin dynamics is not robust to L^2 (more generally, L^p) errors in the score estimate. It is well established that, with small L^2 errors in the estimated score, diffusion models can sample faithfully from the target distribution under fairly mild regularity assumptions within a polynomial time horizon. In contrast, our work shows that even for simple distributions in high dimensions, Langevin dynamics run for any polynomial time horizon will produce a distribution far from the target in Total Variation (TV) distance, even when the L^2 (more generally, L^p) error of the score estimate is arbitrarily small. Since such error is unavoidable in practice when the score function is learned from data, our results provide further justification for diffusion models over Langevin dynamics and caution against using Langevin dynamics with estimated scores.