On the Learning Dynamics of Two-layer Linear Networks with Label Noise SGD
arXiv cs.LG / 3/12/2026
Key Points
- The authors study SGD with label noise on a two-layer over-parameterized linear network to understand its implicit bias and generalization behavior.
- They uncover a two-phase learning dynamic: in Phase I the weights shrink and the model escapes the lazy regime, and in Phase II alignment with the ground-truth interpolator increases until convergence (a minimal sketch of this setup follows the list).
- The analysis highlights label noise as a key driver for the transition from lazy to rich regimes and provides a minimal explanation for its empirical effectiveness.
- They extend the insights to Sharpness-Aware Minimization (SAM) and validate the theory with extensive experiments on synthetic and real-world data; code is released (a single SAM step is sketched after the list).
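
A self-contained sketch of the setup the first two bullets describe: SGD on a two-layer linear network f(x) = vᵀWx with fresh Gaussian noise added to each sampled label, tracking the weight norms (Phase I) and the alignment with the ground truth (Phase II). All dimensions, scales, and rates below are illustrative assumptions, not the paper's configuration:

```python
# A minimal sketch of label-noise SGD on a two-layer linear network
# f(x) = v^T W x. Every value here (dimensions, noise_std, lr, steps)
# is an illustrative choice, not the paper's configuration.
import numpy as np

rng = np.random.default_rng(0)

d, m, n = 20, 100, 10                     # input dim, hidden width, samples (n < d: over-parameterized)
X = rng.standard_normal((n, d)) / np.sqrt(d)
w_star = rng.standard_normal(d)           # hypothetical ground-truth predictor
y = X @ w_star                            # clean labels

scale = 0.3                               # init scale (controls lazy vs. rich behavior)
W = scale * rng.standard_normal((m, d))   # first-layer weights
v = scale * rng.standard_normal(m)        # second-layer weights

lr, noise_std, steps = 0.01, 0.5, 20000

for t in range(steps):
    i = rng.integers(n)
    x = X[i]
    y_noisy = y[i] + noise_std * rng.standard_normal()   # fresh label noise each step
    resid = v @ (W @ x) - y_noisy
    grad_v = resid * (W @ x)              # gradient of 0.5 * resid^2 w.r.t. v
    grad_W = resid * np.outer(v, x)       # gradient of 0.5 * resid^2 w.r.t. W
    v -= lr * grad_v
    W -= lr * grad_W
    if t % 5000 == 0:                     # track the quantities the two phases concern
        w_eff = W.T @ v                   # effective linear map of the network
        align = w_eff @ w_star / (np.linalg.norm(w_eff) * np.linalg.norm(w_star) + 1e-12)
        print(f"step {t}: ||W||={np.linalg.norm(W):.3f}  ||v||={np.linalg.norm(v):.3f}  alignment={align:.3f}")
```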
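
For the SAM extension in the last bullet, here is a hedged sketch of one standard SAM step on the same model: ascend to a nearby perturbed point along the normalized gradient direction, then descend at the original weights using the gradient computed there. The radius rho and step size lr are placeholder values, not taken from the paper:

```python
# One Sharpness-Aware Minimization (SAM) step for the two-layer linear
# model above; rho and lr are illustrative placeholders.
import numpy as np

def sam_step(W, v, x, y_t, lr=0.01, rho=0.05):
    # first pass: gradient of 0.5 * resid^2 at the current weights
    resid = v @ (W @ x) - y_t
    gW, gv = resid * np.outer(v, x), resid * (W @ x)
    gnorm = np.sqrt((gW ** 2).sum() + (gv ** 2).sum()) + 1e-12
    # perturb toward the locally sharpest direction
    W_adv = W + rho * gW / gnorm
    v_adv = v + rho * gv / gnorm
    # second pass: gradient at the perturbed weights
    resid_adv = v_adv @ (W_adv @ x) - y_t
    gW_adv = resid_adv * np.outer(v_adv, x)
    gv_adv = resid_adv * (W_adv @ x)
    # apply the perturbed gradient at the original weights
    return W - lr * gW_adv, v - lr * gv_adv
```

Swapping sam_step into the training loop above, in place of the plain SGD update, would give a label-noise SAM variant of the same experiment.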