
Towards Understanding Adam Convergence on Highly Degenerate Polynomials

arXiv cs.LG / 11 Mar 2026

Ideas & Deep Analysis · Models & Research

Key Points

  • This paper investigates the "natural" auto-convergence of the Adam optimizer on a class of highly degenerate polynomial objective functions, without relying on external schedulers or setting $\beta_2$ near 1.
  • The authors derive theoretical conditions for local asymptotic stability and prove that Adam achieves local linear convergence on these degenerate polynomials, far outpacing the sub-linear convergence of Gradient Descent and Momentum (see the sketch after this list).
  • The acceleration is attributed to a decoupling between the second-moment estimate $v_t$ and the squared gradient $g_t^2$, which exponentially amplifies Adam's effective learning rate.
  • The study also characterizes Adam's hyperparameter phase diagram, identifying three distinct behavioral regimes: stable convergence, spikes, and SignGD-like oscillation, offering practical guidance for tuning Adam's hyperparameters.
  • Experimental results align closely with the theoretical bounds, validating the proposed analytical framework for Adam's behavior in this setting.
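
To make the contrast concrete, here is a minimal sketch (not the authors' code) that runs Adam and plain gradient descent on the degenerate polynomial $f(x) = x^4$, whose minimizer $x^* = 0$ has a vanishing Hessian; the step sizes, iteration count, and $\beta$ values are illustrative defaults rather than settings taken from the paper.

```python
import math

def grad(x: float) -> float:
    """Gradient of the degenerate objective f(x) = x**4."""
    return 4.0 * x ** 3

def run_gd(x: float, lr: float = 1e-2, steps: int = 5000) -> float:
    """Plain gradient descent."""
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def run_adam(x: float, lr: float = 1e-2, beta1: float = 0.9,
             beta2: float = 0.999, eps: float = 1e-8,
             steps: int = 5000) -> float:
    """Textbook Adam with bias correction."""
    m = v = 0.0
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1.0 - beta1) * g      # first-moment EMA
        v = beta2 * v + (1.0 - beta2) * g * g  # second-moment EMA
        m_hat = m / (1.0 - beta1 ** t)         # bias corrections
        v_hat = v / (1.0 - beta2 ** t)
        x -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return x

# GD contracts only polynomially here (roughly |x_t| ~ t^{-1/2}),
# whereas the paper proves local linear convergence for Adam.
print(f"GD   final |x|: {abs(run_gd(1.0)):.2e}")
print(f"Adam final |x|: {abs(run_adam(1.0)):.2e}")
```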


arXiv:2603.09581 (cs)
[Submitted on 10 Mar 2026]

Title: Towards Understanding Adam Convergence on Highly Degenerate Polynomials

Authors: Zhiwei Bai and 4 other authors
Abstract: Adam is a widely used optimization algorithm in deep learning, yet the specific class of objective functions where it exhibits inherent advantages remains underexplored. Unlike prior studies requiring external schedulers and $\beta_2$ near 1 for convergence, this work investigates the "natural" auto-convergence properties of Adam. We identify a class of highly degenerate polynomials where Adam converges automatically without additional schedulers. Specifically, we derive theoretical conditions for local asymptotic stability on degenerate polynomials and demonstrate strong alignment between theoretical bounds and experimental results. We prove that Adam achieves local linear convergence on these degenerate functions, significantly outperforming the sub-linear convergence of Gradient Descent and Momentum. This acceleration stems from a decoupling mechanism between the second moment $v_t$ and squared gradient $g_t^2$, which exponentially amplifies the effective learning rate. Finally, we characterize Adam's hyperparameter phase diagram, identifying three distinct behavioral regimes: stable convergence, spikes, and SignGD-like oscillation.
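
For reference, the decoupling claim can be read against the standard Adam recursion, written in the abstract's notation ($g_t$, $v_t$); the closing amplification argument below is a hedged reading of the paper's mechanism, not a derivation reproduced from it.

$$m_t = \beta_1 m_{t-1} + (1-\beta_1)\,g_t, \qquad v_t = \beta_2 v_{t-1} + (1-\beta_2)\,g_t^2,$$
$$x_{t+1} = x_t - \frac{\eta}{\sqrt{\hat v_t} + \epsilon}\,\hat m_t, \qquad \hat m_t = \frac{m_t}{1-\beta_1^t}, \quad \hat v_t = \frac{v_t}{1-\beta_2^t}.$$

Treating $\eta_t^{\mathrm{eff}} = \eta/(\sqrt{\hat v_t} + \epsilon)$ as the effective learning rate: near a highly degenerate minimizer the squared gradients $g_t^2$ collapse faster than the moving average $v_t$ can track them, so once $v_t$ decouples it decays at the geometric rate $\beta_2^t$, and $\eta_t^{\mathrm{eff}} \propto \beta_2^{-t/2}$ grows exponentially, offsetting the vanishing gradient signal.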
Subjects: Machine Learning (cs.LG)
Cite as: arXiv:2603.09581 [cs.LG]
  (or arXiv:2603.09581v1 [cs.LG] for this version)
  https://doi.org/10.48550/arXiv.2603.09581

Submission history

From: Zhiwei Bai
[v1] Tue, 10 Mar 2026 12:30:20 UTC (6,902 KB)