AI Navigate

Why Grokking Takes So Long: A First-Principles Theory of Representational Phase Transitions

arXiv cs.AI / 3/17/2026


Key Points

  • A first-principles theory explains grokking as a norm-driven representational phase transition during regularized training, where the model moves from high-norm memorization to a lower-norm generalized representation.
  • The authors derive a scaling law for the grokking delay: T_grok - T_mem = Theta((1 / gamma_eff) * log(||theta_mem||^2 / ||theta_post||^2)), with gamma_eff depending on the optimizer (SGD or AdamW).
  • They validate the theory with 293 training runs across modular addition, modular multiplication, and sparse parity tasks, confirming inverse scaling with weight decay and learning rate, and logarithmic dependence on the norm ratio (R^2 > 0.97).
  • The results show that grokking requires an optimizer that can decouple memorization from norm contraction; SGD can fail to grok at hyperparameters where AdamW reliably groks.
  • The work provides the first quantitative scaling law for grokking delay and frames grokking as a predictable consequence of norm separation between competing interpolating representations.
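The scaling law in the second bullet can be evaluated directly. A minimal sketch (all parameter values are hypothetical, chosen only for illustration) of the predicted delay up to the hidden Theta constant:

```python
import math

def predicted_grokking_delay(eta, lam, norm_mem_sq, norm_post_sq):
    """Delay predicted by the paper's scaling law, up to the Theta constant:
    T_grok - T_mem ~ (1 / gamma_eff) * log(||theta_mem||^2 / ||theta_post||^2),
    with gamma_eff = eta * lam for SGD (for AdamW, gamma_eff >= eta * lam)."""
    gamma_eff = eta * lam
    return (1.0 / gamma_eff) * math.log(norm_mem_sq / norm_post_sq)

# Hypothetical values: learning rate 1e-3, weight decay 1.0,
# memorization norm^2 = 400, post-grokking norm^2 = 100.
delay = predicted_grokking_delay(eta=1e-3, lam=1.0, norm_mem_sq=400.0, norm_post_sq=100.0)
```

Doubling either the weight decay or the learning rate halves the predicted delay, while doubling the squared-norm ratio only adds a constant, which is exactly the inverse and logarithmic scaling the 293-run experiments confirm.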

Abstract

Grokking is the sudden generalization that appears long after a model has perfectly memorized its training data. Although this phenomenon has been widely observed, there is still no quantitative theory explaining the length of the delay between memorization and generalization. Prior work has noted that weight decay plays an important role, but no result derives tight bounds for the delay or explains its scaling behavior. We present a first-principles theory showing that grokking arises from a norm-driven representational phase transition in regularized training dynamics. Training first converges to a high-norm memorization solution and only later contracts toward a lower-norm structured representation that generalizes. Our main result establishes a scaling law for the delay: T_grok - T_mem = Theta((1 / gamma_eff) * log(||theta_mem||^2 / ||theta_post||^2)), where gamma_eff is the effective contraction rate of the optimizer (gamma_eff = eta * lambda for SGD and gamma_eff >= eta * lambda for AdamW). The upper bound follows from a discrete Lyapunov contraction argument, and the matching lower bound arises from dynamical constraints of regularized first-order optimization. Across 293 training runs spanning modular addition, modular multiplication, and sparse parity tasks, we confirm three predictions: inverse scaling with weight decay, inverse scaling with learning rate, and logarithmic dependence on the norm ratio (R^2 > 0.97). We further find that grokking requires an optimizer that can decouple memorization from contraction: SGD fails under hyperparameters where AdamW reliably groks. These results show that grokking is a predictable consequence of norm separation between competing interpolating representations and provide the first quantitative scaling law for the delay of grokking.
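The contraction mechanism in the abstract can be illustrated with a toy simulation. The sketch below (not the authors' code; a simplification that keeps only the weight-decay term of an SGD update) counts the steps needed to shrink the weight norm from a memorization level to a lower post-grokking level, and compares this with the (1 / gamma_eff) * log(norm ratio) scale of the law:

```python
import math

def steps_to_contract(eta, lam, norm_mem, norm_post):
    """Iterate only the weight-decay part of an SGD step,
    theta <- (1 - eta * lam) * theta, counting steps until the
    weight norm falls from norm_mem to norm_post."""
    norm, steps = norm_mem, 0
    while norm > norm_post:
        norm *= (1.0 - eta * lam)
        steps += 1
    return steps

eta, lam = 1e-3, 1.0  # hypothetical learning rate and weight decay
simulated = steps_to_contract(eta, lam, norm_mem=20.0, norm_post=10.0)

# Theta-order prediction from the scaling law (note: squared-norm ratio):
predicted_scale = math.log(20.0**2 / 10.0**2) / (eta * lam)
```

The simulated step count comes out at the same order as predicted_scale (it differs by a constant factor of about 2, since the law is stated in squared norms), matching the Theta-level claim that the delay is set by the contraction rate eta * lam and the log of the norm ratio.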