Heavy-Tailed and Long-Range Dependent Noise in Stochastic Approximation: A Finite-Time Analysis

arXiv cs.LG · March 23, 2026

Key Points

  • The paper studies stochastic approximation under heavy-tailed and long-range dependent noise, extending beyond classical martingale difference or bounded-variance assumptions.
  • It provides finite-time moment bounds and explicit convergence rates that quantify the impact of heavy tails and temporal dependence on SA.
  • The authors introduce a noise-averaging argument that regularizes the impact of the noise without modifying the iteration, with applications to SGD and gradient play.
  • Numerical experiments corroborate the theory and illustrate practical implications for reinforcement learning and optimization settings (a toy simulation in this spirit is sketched after this list).
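
A minimal, purely illustrative sketch of this setting (not the paper's experiments): scalar stochastic approximation on an assumed strongly monotone operator G(x) = x, driven by i.i.d. Student-t noise with 1.5 degrees of freedom, which has a finite mean but infinite variance. The operator, step sizes, and noise law are assumptions made here, and long-range dependence is not modeled.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions (not from the paper): scalar operator
# G(x) = x with unique root x* = 0. G is strongly monotone, since
# (G(x) - G(y)) * (x - y) = (x - y)**2. With G = f' for
# f(x) = 0.5 * x**2, the loop below is exactly SGD on f.
def G(x):
    return x

T = 10_000
x = 5.0                       # arbitrary initial iterate
iterates = np.empty(T)
for k in range(T):
    # Student-t noise, 1.5 degrees of freedom: finite mean but
    # infinite variance, outside classical bounded-variance SA theory.
    w = rng.standard_t(df=1.5)
    alpha = 1.0 / (k + 10)    # diminishing Robbins-Monro step size
    x -= alpha * (G(x) + w)   # the basic SA update
    iterates[k] = x

# Single runs are erratic under heavy tails: one noise spike can throw
# the iterate far from the root, which is why heavy-tail analyses
# typically bound moments E|x_k - x*|^p only for p below the tail
# index rather than the (infinite) variance.
print(f"final iterate: {x:.4f}")
print(f"median |x_k| over last 1,000 steps: "
      f"{np.median(np.abs(iterates[-1000:])):.4f}")
```

Rerunning with different seeds shows occasional large excursions caused by single noise spikes, which is precisely the regime where bounded-second-moment analyses break down.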

Abstract

Stochastic approximation (SA) is a fundamental iterative framework with broad applications in reinforcement learning and optimization. Classical analyses typically rely on martingale-difference or Markov noise with bounded second moments, yet heavy-tailed and long-range dependent (LRD) noise arises frequently in practical settings such as finance and communications. In this work, we study SA for finding the root of a strongly monotone operator under these non-classical noise models. We establish the first finite-time moment bounds in both settings, providing explicit convergence rates that quantify the impact of heavy tails and temporal dependence. Our analysis employs a noise-averaging argument that regularizes the impact of noise without modifying the iteration. Finally, we apply our general framework to stochastic gradient descent (SGD) and gradient play, and corroborate our finite-time analysis through numerical experiments.
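
In standard SA notation (the paper's exact formulation may differ), the iteration and the strong monotonicity condition referenced above read:

$$
x_{k+1} = x_k - \alpha_k \bigl( G(x_k) + w_k \bigr), \qquad \langle G(x) - G(y),\, x - y \rangle \ge \mu \| x - y \|^2 \ \text{ for some } \mu > 0,
$$

where the goal is the root $x^*$ with $G(x^*) = 0$, $\alpha_k$ are step sizes, and $(w_k)$ is the noise sequence. Classically, $(w_k)$ is a martingale-difference sequence with $\mathbb{E}\|w_k\|^2 < \infty$; here it may instead have infinite variance (heavy tails) or non-summable temporal correlations (long-range dependence).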