Latent Algorithmic Structure Precedes Grokking: A Mechanistic Study of ReLU MLPs on Modular Arithmetic

arXiv cs.LG / March 26, 2026


Key Points

  • The paper studies grokking on modular addition and shows that, for ReLU MLPs, the learned input weight patterns are near-binary square waves rather than the sinusoidal weight distributions reported in prior work.
  • It reports that intermediate-valued input weights occur only near sign-change boundaries, indicating a structured binarization process during training.
  • Using DFT-based analysis, the authors find output weight Fourier phases follow a phase-sum relation (φ_out = φ_a + φ_b) that holds even when models are trained on noisy data and do not grok.
  • The authors construct an idealized MLP by replacing learned weights with square waves (input) and cosine components (output) parameterized by the dominant Fourier frequencies/phases extracted from the real model.
  • The idealized MLP attains high modular-addition accuracy (95.5%) even when it is parameterized from a noisy-data-trained model that itself generalizes poorly (0.23%), suggesting grokking reflects sharpening of an already-encoded algorithm rather than discovering it from scratch.
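The DFT-based extraction described above can be sketched in a few lines of NumPy. This is a minimal illustration on a synthetic square-wave weight vector, not the paper's code; the modulus p = 97, the frequency 3, the phase 0.7, and the noise level are all hypothetical values chosen for the example:

```python
import numpy as np

def dominant_component(w):
    """Return (frequency index, phase, amplitude) of the dominant
    nonzero-frequency DFT component of a 1-D weight vector w."""
    spec = np.fft.rfft(w)
    k = 1 + int(np.argmax(np.abs(spec[1:])))   # skip the DC component
    return k, float(np.angle(spec[k])), 2.0 * float(np.abs(spec[k])) / len(w)

rng = np.random.default_rng(0)
p = 97                                          # hypothetical modulus
x = np.arange(p)

# Synthetic "learned" input weights: a noisy square wave of frequency 3,
# standing in for one neuron's input weight row over the one-hot inputs.
w = np.sign(np.cos(2 * np.pi * 3 * x / p + 0.7)) + 0.05 * rng.standard_normal(p)

k, phi, amp = dominant_component(w)             # k ≈ 3, phi ≈ 0.7
```

A square wave's fundamental Fourier amplitude is 4/π times that of the underlying cosine, so `amp` lands near 1.27 rather than 1.0, which is one way the square-wave (versus sinusoidal) structure shows up in the spectrum.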

Abstract

Grokking, the phenomenon where the validation accuracy of neural networks on modular addition of two integers rises long after the training data has been memorized, has been characterized in previous work as producing sinusoidal input weight distributions in transformers and multi-layer perceptrons (MLPs). We find empirically that ReLU MLPs in our experimental setting instead learn near-binary square-wave input weights, in which intermediate-valued weights appear exclusively near sign-change boundaries, alongside output weight distributions whose dominant Fourier phases satisfy a phase-sum relation φ_out = φ_a + φ_b; this relation holds even when the model is trained on noisy data and fails to grok. We extract the frequency and phase of each neuron's weights via the DFT and construct an idealized MLP: input weights are replaced by perfect binary square waves and output weights by cosines, both parametrized by the frequencies, phases, and amplitudes extracted from the dominant Fourier components of the real model's weights. This idealized model achieves 95.5% accuracy when the frequencies and phases are extracted from the weights of a model trained on noisy data that itself achieves only 0.23% accuracy. This suggests that grokking does not discover the correct algorithm but rather sharpens an algorithm substantially encoded during memorization, progressively binarizing the input weights into cleaner square waves and aligning the output weights until generalization becomes possible.
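To build intuition for why the idealized construction in the abstract can compute modular addition at all, the following sketch assembles a two-layer model from square-wave input weights and cosine output weights obeying the phase-sum relation φ_out = φ_a + φ_b. Unlike the paper, it draws frequencies and phases at random instead of extracting them from a trained model, and the modulus p = 97 and neuron count are assumptions chosen for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 97          # hypothetical modulus (prime, so every frequency is coprime to p)
N = 2000        # hypothetical number of idealized hidden neurons

# Random dominant frequencies and phases, standing in for those a DFT
# would extract from a real model's weights.
f = rng.integers(1, (p - 1) // 2 + 1, size=N)
phi_a = rng.uniform(0, 2 * np.pi, size=N)
phi_b = rng.uniform(0, 2 * np.pi, size=N)

x = np.arange(p)
ang = 2 * np.pi * f[:, None] * x[None, :] / p           # (N, p) angle grid

W_a = np.sign(np.cos(ang + phi_a[:, None]))             # square-wave input weights for a
W_b = np.sign(np.cos(ang + phi_b[:, None]))             # square-wave input weights for b
W_out = np.cos(ang + (phi_a + phi_b)[:, None])          # phase-sum output weights

def predict(a, b):
    """Forward pass of the idealized MLP on one-hot inputs a and b."""
    h = np.maximum(W_a[:, a] + W_b[:, b], 0.0)          # ReLU hidden layer, shape (N,)
    logits = h @ W_out                                   # shape (p,)
    return int(np.argmax(logits))

# Accuracy on a random sample of input pairs.
pairs = rng.integers(0, p, size=(200, 2))
acc = float(np.mean([predict(a, b) == (a + b) % p for a, b in pairs]))
```

The construction works because ReLU(s_a + s_b) on ±1 square waves contains a product term s_a·s_b, whose fundamental component includes cos(A + B); the output cosine at phase φ_a + φ_b picks that term out, so logits constructively interfere only at c = a + b (mod p) while other terms average out across neurons.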