A Systematic Empirical Study of Grokking: Depth, Architecture, Activation, and Regularization

arXiv cs.LG / 3/27/2026


Key Points

  • The paper presents a controlled empirical study to disentangle how architecture, optimization, and regularization affect “grokking” (delayed generalization after memorization) on modular addition (mod 97).
  • It finds that grokking is driven less by architecture alone than by interactions between optimization stability and regularization.
  • Depth shows a non-monotonic pattern: depth-4 MLPs consistently fail to grok while depth-8 residual networks regain generalization, implying deeper models need architectural stabilization to grok.
  • Reported Transformer-vs-MLP differences largely vanish when hyperparameters and training are matched, suggesting earlier conclusions were confounded by optimizer/regularization settings.
  • Weight decay emerges as the dominant “control parameter,” with grokking occurring only in a narrow Goldilocks range; activation effects (GELU vs ReLU) are also regime-dependent and only show big advantages when regularization allows memorization.
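The task underlying all of these experiments, modular addition mod 97, is simple to reproduce. The following is a minimal sketch of how such a dataset is typically constructed; the exact train fraction, encoding, and split used in the paper are not specified here, so the names and defaults below are illustrative assumptions.

```python
# Illustrative sketch of the (a, b) -> (a + b) mod p grokking task.
# train_frac=0.5 and the random split are assumptions, not the paper's exact setup.
import itertools
import random

def modular_addition_dataset(p=97, train_frac=0.5, seed=0):
    """Enumerate all (a, b, (a + b) mod p) triples and split into train/test."""
    triples = [(a, b, (a + b) % p)
               for a, b in itertools.product(range(p), repeat=2)]
    rng = random.Random(seed)
    rng.shuffle(triples)
    n_train = int(train_frac * len(triples))
    return triples[:n_train], triples[n_train:]

train, test = modular_addition_dataset()
```

Because the full input space has only 97 × 97 = 9,409 pairs, the model sees a fixed fraction of all possible examples, which is what makes the memorization-then-generalization transition observable.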

Abstract

Grokking, the delayed transition from memorization to generalization in neural networks, remains poorly understood, in part because prior empirical studies confound the roles of architecture, optimization, and regularization. We present a controlled study that systematically disentangles these factors on modular addition (mod 97), with matched and carefully tuned training regimes across models. Our central finding is that grokking dynamics are not primarily determined by architecture, but by interactions between optimization stability and regularization. Specifically, we show: (1) **depth has a non-monotonic effect**, with depth-4 MLPs consistently failing to grok while depth-8 residual networks recover generalization, demonstrating that depth requires architectural stabilization; (2) **the apparent gap between Transformers and MLPs largely disappears** (1.11× delay) under matched hyperparameters, indicating that previously reported differences are largely due to optimizer and regularization confounds; (3) **activation function effects are regime-dependent**, with GELU up to 4.3× faster than ReLU only when regularization permits memorization; and (4) **weight decay is the dominant control parameter**, exhibiting a narrow "Goldilocks" regime in which grokking occurs, while too little or too much prevents generalization. Across 3–5 seeds per configuration, these results provide a unified empirical account of grokking as an interaction-driven phenomenon. Our findings challenge architecture-centric interpretations and clarify how optimization and regularization jointly govern delayed generalization.