Generalization at the Edge of Stability

arXiv cs.LG · April 22, 2026

💬 Opinion · Models & Research

Key Points

  • The paper studies why training neural networks with large learning rates “at the edge of stability” can improve generalization, despite optimization dynamics becoming oscillatory or chaotic.
  • It models stochastic optimizers as random dynamical systems, showing they can converge to fractal attractor sets with lower intrinsic dimension rather than single points.
  • Building on Lyapunov dimension ideas, the authors introduce a new metric called “sharpness dimension” and derive a generalization bound tied to this quantity.
  • The bound depends on the full Hessian spectrum and the structure of its partial determinants, indicating that neither trace nor spectral norm alone explains generalization in the chaotic regime.
  • Experiments on multiple MLPs and transformers support the theory and provide additional insight into “grokking,” the recently observed phenomenon in which test performance improves suddenly long after training accuracy has saturated.
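The “sharpness dimension” builds on the classical Lyapunov (Kaplan–Yorke) dimension of an attractor, which counts how many expanding directions a dynamical system has before volume contraction takes over, interpolated to a fractional value. As a rough illustration of that underlying idea (this is not the paper’s exact definition, just the standard Kaplan–Yorke formula), the dimension can be computed from the ordered Lyapunov exponents:

```python
import numpy as np

def kaplan_yorke_dimension(exponents):
    """Kaplan-Yorke (Lyapunov) dimension from a list of Lyapunov
    exponents. Take the largest k such that the partial sum of the
    top-k exponents is non-negative, then add a fractional part
    from the next (contracting) exponent."""
    lam = np.sort(np.asarray(exponents, dtype=float))[::-1]  # decreasing
    partial = np.cumsum(lam)  # the "partial determinants" in log form
    nonneg = np.where(partial >= 0)[0]
    if len(nonneg) == 0:
        return 0.0  # every direction contracts: a point attractor
    k = nonneg[-1]
    if k == len(lam) - 1:
        return float(len(lam))  # no contracting direction left to interpolate
    return (k + 1) + partial[k] / abs(lam[k + 1])

# One expanding, one neutral, one contracting direction:
# dimension = 2 + 0.9 / 1.2, i.e. roughly 2.75.
print(kaplan_yorke_dimension([0.9, 0.0, -1.2]))
```

Note that the fractional part depends on the running sums of the whole spectrum, not on any single exponent, which mirrors the paper’s point that neither the trace nor the spectral norm of the Hessian alone determines the bound.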

Abstract

Training modern neural networks often relies on large learning rates, operating at the edge of stability, where the optimization dynamics exhibit oscillatory and chaotic behavior. Empirically, this regime often yields improved generalization performance, yet the underlying mechanism remains poorly understood. In this work, we represent stochastic optimizers as random dynamical systems, which often converge to a fractal attractor set (rather than a point) with a smaller intrinsic dimension. Building on this connection and inspired by Lyapunov dimension theory, we introduce a novel notion of dimension, coined the “sharpness dimension”, and prove a generalization bound based on this dimension. Our results show that generalization in the chaotic regime depends on the complete Hessian spectrum and the structure of its partial determinants, highlighting a complexity that cannot be captured by the trace or spectral norm considered in prior work. Experiments across various MLPs and transformers validate our theory while also providing new insights into the recently observed phenomenon of grokking.
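The “edge of stability” is easiest to see on a quadratic objective, where gradient descent with learning rate η contracts toward the minimum only when η < 2/λ (λ being the curvature, i.e. the top Hessian eigenvalue); between 1/λ and 2/λ the iterates converge while flipping sign each step, and beyond 2/λ they diverge. A minimal sketch of this threshold (illustrative only, not the paper’s stochastic-dynamics setup):

```python
import numpy as np

def gd_on_quadratic(curvature, lr, steps=50, x0=1.0):
    """Run gradient descent on f(x) = 0.5 * curvature * x**2.
    The update x <- x - lr * curvature * x multiplies x by
    (1 - lr * curvature), which contracts iff lr < 2 / curvature.
    A negative multiplier makes the iterates oscillate in sign."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] * (1.0 - lr * curvature))
    return np.array(xs)

lam = 4.0  # curvature (the top Hessian eigenvalue in 1-D)
stable = gd_on_quadratic(lam, lr=0.4)     # lr < 2/lam = 0.5: converges
edge = gd_on_quadratic(lam, lr=0.49)      # just below the edge: slow, oscillatory
unstable = gd_on_quadratic(lam, lr=0.6)   # lr > 2/lam: diverges

print(abs(stable[-1]) < 1e-6, abs(unstable[-1]) > 1e3)  # True True
```

In higher dimensions the same multiplier appears independently along each Hessian eigendirection, which is why the paper’s analysis has to track the full eigenvalue spectrum rather than a single curvature number.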