
OptEMA: Adaptive Exponential Moving Average for Stochastic Optimization with Zero-Noise Optimality

arXiv cs.LG / 3/11/2026

Ideas & Deep Analysis · Models & Research

Key Points

  • The paper introduces OptEMA, an adaptive exponential moving average method designed to improve stochastic optimization, particularly addressing limitations of Adam-style optimizers.
  • OptEMA has two variants: OptEMA-M applies an adaptive, decreasing EMA coefficient to the first moment with a fixed second-moment decay, while OptEMA-V swaps these roles; both enable closed-loop, Lipschitz-free parameterization (see the sketch after this list).
  • The method achieves rigorous convergence guarantees under standard SGD assumptions without relying on boundedness conditions or prior knowledge of Lipschitz constants.
  • OptEMA adapts to the noise level, achieving nearly optimal convergence rates in both the noisy and zero-noise regimes and eliminating the need for manual hyperparameter retuning in the deterministic case.
  • This development has implications for optimization performance in machine learning tasks, potentially improving training efficiency and robustness of models using Adam-like optimizers.
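
The following is a minimal, hypothetical sketch of what an "adaptive EMA" update of the OptEMA-M flavor could look like: an Adam-style step in which the first-moment EMA coefficient decreases over iterations while the second-moment decay stays fixed (OptEMA-V would swap these roles). The schedule alpha_t = 1/sqrt(t+1), the learning rate, and the function name are illustrative assumptions; the paper's actual closed-loop, trajectory-dependent parameterization is not reproduced here.

```python
import numpy as np

def optema_m_like_step(x, m, v, grad, t, lr=1e-3, beta2=0.999, eps=1e-8):
    """One Adam-style step with an adaptive, decreasing first-moment EMA
    coefficient and a fixed second-moment decay.

    NOTE: alpha_t = 1/sqrt(t + 1) is a placeholder schedule, not the
    paper's parameterization, which is trajectory-dependent (closed-loop).
    """
    alpha_t = 1.0 / np.sqrt(t + 1.0)             # adaptive, decreasing EMA coefficient
    m = (1.0 - alpha_t) * m + alpha_t * grad     # first moment: adaptive decay
    v = beta2 * v + (1.0 - beta2) * grad ** 2    # second moment: fixed decay
    x = x - lr * m / (np.sqrt(v) + eps)          # Adam-style preconditioned step
    return x, m, v

# Toy usage on f(x) = ||x||^2 with noisy gradients (sigma controls the noise level)
x, m, v = np.ones(5), np.zeros(5), np.zeros(5)
sigma = 0.01
for t in range(1000):
    grad = 2.0 * x + sigma * np.random.randn(5)  # unbiased stochastic gradient
    x, m, v = optema_m_like_step(x, m, v, grad, t)
```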


arXiv:2603.09923 (cs)
[Submitted on 10 Mar 2026]

Title: OptEMA: Adaptive Exponential Moving Average for Stochastic Optimization with Zero-Noise Optimality

Authors: Ganzhao Yuan
Abstract: The Exponential Moving Average (EMA) is a cornerstone of widely used optimizers such as Adam. However, existing theoretical analyses of Adam-style methods have notable limitations: their guarantees can remain suboptimal in the zero-noise regime, rely on restrictive boundedness conditions (e.g., bounded gradients or objective gaps), use constant or open-loop stepsizes, or require prior knowledge of Lipschitz constants. To overcome these bottlenecks, we introduce OptEMA and analyze two novel variants: OptEMA-M, which applies an adaptive, decreasing EMA coefficient to the first-order moment with a fixed second-order decay, and OptEMA-V, which swaps these roles. Crucially, OptEMA is closed-loop and Lipschitz-free in the sense that its effective stepsizes are trajectory-dependent and do not require the Lipschitz constant for parameterization. Under standard stochastic gradient descent (SGD) assumptions, namely smoothness, a lower-bounded objective, and unbiased gradients with bounded variance, we establish rigorous convergence guarantees. Both variants achieve a noise-adaptive convergence rate of $\widetilde{\mathcal{O}}(T^{-1/2}+\sigma^{1/2} T^{-1/4})$ for the average gradient norm, where $\sigma$ is the noise level. In particular, in the zero-noise regime where $\sigma=0$, our bounds reduce to the nearly optimal deterministic rate $\widetilde{\mathcal{O}}(T^{-1/2})$ without manual hyperparameter retuning.
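
To make the noise-adaptive claim concrete, one natural reading of the stated bound (constants and logarithmic factors hidden by $\widetilde{\mathcal{O}}$ are omitted here) is

$$\frac{1}{T}\sum_{t=1}^{T} \mathbb{E}\,\big\|\nabla f(x_t)\big\| \;\le\; \widetilde{\mathcal{O}}\!\left(T^{-1/2} + \sigma^{1/2}\, T^{-1/4}\right),$$

so for $\sigma > 0$ the $\sigma^{1/2} T^{-1/4}$ term dominates for large $T$, while setting $\sigma = 0$ collapses the bound to the nearly optimal deterministic rate $\widetilde{\mathcal{O}}(T^{-1/2})$ with the same hyperparameters.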
Subjects: Machine Learning (cs.LG); Numerical Analysis (math.NA); Optimization and Control (math.OC)
Cite as: arXiv:2603.09923 [cs.LG]
  (or arXiv:2603.09923v1 [cs.LG] for this version)
  https://doi.org/10.48550/arXiv.2603.09923

Submission history

From: Ganzhao Yuan
[v1] Tue, 10 Mar 2026 17:19:54 UTC (114 KB)