
On the Learning Dynamics of Two-layer Linear Networks with Label Noise SGD

arXiv cs.LG / 3/12/2026


Key Points

  • The authors study SGD with label noise on a two-layer over-parameterized linear network to understand its implicit bias and generalization behavior.
  • They uncover a two-phase learning dynamic: Phase I where weights shrink and the model escapes the lazy regime, and Phase II where alignment with the ground-truth interpolator increases toward convergence.
  • The analysis highlights label noise as a key driver for the transition from lazy to rich regimes and provides a minimal explanation for its empirical effectiveness.
  • They extend the insights to Sharpness-Aware Minimization (SAM) and validate the theory with extensive experiments on synthetic and real-world data, with code released.
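Since the key points invoke Sharpness-Aware Minimization, a minimal sketch of the standard SAM update may help: each step first ascends to the worst-case weights within an L2 ball of radius rho, then descends using the gradient taken at that perturbed point. The toy least-squares problem, the step size, and the radius rho below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy noiseless least-squares problem (hypothetical data, for illustration)
X = rng.normal(size=(50, 5))
w_true = np.ones(5)
y = X @ w_true

def grad(w):
    # Gradient of the mean squared error 0.5 * mean((X @ w - y)**2)
    return X.T @ (X @ w - y) / len(y)

w = np.zeros(5)
lr, rho = 0.1, 0.05  # step size and SAM perturbation radius (assumed values)

for _ in range(200):
    g = grad(w)
    # Ascent step: move to the (first-order) worst-case point in an L2 ball
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    # Descent step: update using the gradient evaluated at the perturbed weights
    w -= lr * grad(w + eps)
```

On a quadratic like this, the iterates settle within roughly rho of the minimizer; the point of the sketch is only the two-step structure that the paper relates to label noise SGD.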

Abstract

One crucial factor behind the success of deep learning lies in the implicit bias induced by noise inherent in gradient-based training algorithms. Motivated by empirical observations that training with noisy labels improves model generalization, we delve into the underlying mechanisms behind stochastic gradient descent (SGD) with label noise. Focusing on a two-layer over-parameterized linear network, we analyze the learning dynamics of label noise SGD, unveiling a two-phase learning behavior. In Phase I, the magnitudes of the model weights progressively diminish, and the model escapes the lazy regime and enters the rich regime. In Phase II, the alignment between the model weights and the ground-truth interpolator increases, and the model eventually converges. Our analysis highlights the critical role of label noise in driving the transition from the lazy to the rich regime and provides a minimal explanation for its empirical success. Furthermore, we extend these insights to Sharpness-Aware Minimization (SAM), showing that the principles governing label noise SGD also apply to broader optimization algorithms. Extensive experiments, conducted under both synthetic and real-world setups, strongly support our theory. Our code is released at https://github.com/a-usually/Label-Noise-SGD.
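To make the object of study concrete, here is a minimal, self-contained sketch of label noise SGD on a two-layer linear network: at each step a single example is sampled, its label is perturbed with Gaussian noise of level sigma, and both weight matrices are updated by SGD on the squared loss of that noisy example. The teacher vector, the dimensions, and the noise level are illustrative assumptions; this is not the authors' experimental configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Over-parameterized synthetic setup: more features than samples (assumed sizes)
d, n, hidden = 20, 10, 50
X = rng.normal(size=(n, d))
w_star = rng.normal(size=d) / np.sqrt(d)  # ground-truth linear teacher
y = X @ w_star

# Two-layer linear network: f(x) = W2 @ (W1 @ x)
W1 = rng.normal(size=(hidden, d)) / np.sqrt(d)
W2 = rng.normal(size=(1, hidden)) / np.sqrt(hidden)

lr, sigma = 0.01, 0.1  # step size and label-noise level (assumed values)

def clean_loss(W1, W2):
    # Squared loss against the noiseless labels, to track generalization
    pred = (W2 @ (W1 @ X.T)).ravel()
    return 0.5 * np.mean((pred - y) ** 2)

init_loss = clean_loss(W1, W2)
for step in range(5000):
    i = rng.integers(n)
    x_i = X[i]
    y_i = y[i] + sigma * rng.normal()  # label noise: perturb the target each step
    h = W1 @ x_i
    err = (W2 @ h).item() - y_i
    # Gradients of 0.5 * (f(x_i) - y_i)**2, computed before either update
    g2 = err * h[None, :]
    g1 = err * W2.T @ x_i[None, :]
    W2 -= lr * g2
    W1 -= lr * g1

final_loss = clean_loss(W1, W2)
```

Tracking the Frobenius norms of W1 and W2 along this loop is one way to observe the weight-shrinkage behavior the paper associates with Phase I, though the sketch itself makes no claim about matching the paper's quantitative results.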