A Theory of Generalization in Deep Learning

arXiv cs.LG / 5/5/2026


Key Points

  • The paper proposes a non-asymptotic generalization theory for deep learning in which the empirical neural tangent kernel partitions the output space into signal directions and noise directions.
  • It argues that the eigen-structure of the kernel lets error dissipate rapidly along signal directions, while near-zero eigenvalues in the orthogonal noise subspace trap residual error in a test-invisible reservoir (see the sketch after this list).
  • The theory claims that minibatch SGD accumulates coherent “population signal” via fast linear drift while relegating idiosyncratic memorization to a slower, diffusive random walk, and that generalization is still guaranteed under feature learning, where the kernel changes by O(1) in operator norm.
  • The authors show the framework can explain multiple known deep-learning phenomena (benign overfitting, double descent, implicit bias, and grokking) and introduce an exact population-risk objective derived from a single training run without validation data.
  • They report that this objective acts in practice as an SNR preconditioner on top of Adam, accelerating grokking (about 5×), reducing memorization in PINNs and implicit neural representations, and improving DPO fine-tuning under noisy preferences while staying about 3× closer to the reference policy.
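A minimal sketch of the spectral picture behind the second and third bullets, using standard lazy-regime NTK dynamics; the notation ($K$, $\lambda_i$, $v_i$, $r$, $\eta$, $\mu$, $\sigma$) is ours, not the paper's:

```latex
% Linearized (lazy-regime) residual dynamics in the eigenbasis of the
% empirical NTK  K = \sum_i \lambda_i v_i v_i^\top,  residual  r(t) = f_t(X) - y:
\frac{\mathrm{d}r}{\mathrm{d}t} = -K\,r(t)
  \;\Longrightarrow\;
  \langle v_i, r(t)\rangle = e^{-\lambda_i t}\,\langle v_i, r(0)\rangle .
% Large-\lambda_i (signal) directions decay fast; \lambda_i \approx 0 (noise)
% directions barely move, forming the "test-invisible reservoir".

% Drift-vs-diffusion heuristic for the signal channel under minibatch SGD,
% with step size \eta, mean gradient \mu, and gradient-noise scale \sigma:
\underbrace{\eta\, t\, \mu}_{\text{coherent drift}}
  \quad\text{vs.}\quad
  \underbrace{\eta\,\sqrt{t}\,\sigma}_{\text{diffusive noise}},
  \qquad \text{SNR} \propto \sqrt{t}.
```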

Abstract

We present a non-asymptotic theory of generalization in deep learning in which the empirical neural tangent kernel partitions the output space. In directions corresponding to signal, error dissipates rapidly; in the vast orthogonal subspace corresponding to noise, the kernel's near-zero eigenvalues trap residual error in a test-invisible reservoir. Within the signal channel, minibatch SGD ensures that coherent population signal accumulates via fast linear drift, while idiosyncratic memorization is suppressed into a slow, diffusive random walk. We prove that generalization survives even when the kernel evolves by O(1) in operator norm, i.e., in the full feature-learning regime. This theory naturally explains disparate phenomena in deep learning, such as benign overfitting, double descent, implicit bias, and grokking. Lastly, we derive an exact population-risk objective from a single training run with no validation data, for any architecture, loss, or optimizer, and prove that it measures precisely the noise in the signal channel. In practice this objective reduces to an SNR preconditioner on top of Adam, adding one state vector at no extra cost; it accelerates grokking by 5×, suppresses memorization in PINNs and implicit neural representations, and improves DPO fine-tuning under noisy preferences while staying 3× closer to the reference policy.
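
The abstract says the objective reduces to an SNR preconditioner on top of Adam with one extra state vector. The paper's exact construction is not reproduced here; the NumPy sketch below is one plausible reading, in which Adam's per-coordinate update is damped by a smoothed estimate of the gradient signal-to-noise ratio. The function name `snr_adam_step`, the SNR formula `m_hat**2 / (v_hat + eps)`, and the extra buffer `s` are all our assumptions, not the authors' code.

```python
import numpy as np

def snr_adam_step(p, g, state, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One hypothetical SNR-preconditioned Adam step on parameters p with gradient g."""
    state["t"] += 1
    t = state["t"]
    state["m"] = b1 * state["m"] + (1 - b1) * g        # EMA of gradient (Adam's m)
    state["v"] = b2 * state["v"] + (1 - b2) * g * g    # EMA of squared gradient (Adam's v)
    m_hat = state["m"] / (1 - b1 ** t)                 # bias-corrected moments
    v_hat = state["v"] / (1 - b2 ** t)
    # Per-coordinate SNR: coherent drift (m_hat^2) over total gradient power (v_hat).
    snr = m_hat ** 2 / (v_hat + eps)
    state["s"] = b2 * state["s"] + (1 - b2) * snr      # the one extra state vector
    s_hat = state["s"] / (1 - b2 ** t)                 # bias-corrected SNR estimate
    # Standard Adam direction, rescaled coordinate-wise by the estimated SNR.
    return p - lr * s_hat * m_hat / (np.sqrt(v_hat) + eps)

# Usage: the state holds plain arrays, initialized to zero like Adam's moments.
p = np.zeros(4)
state = {"t": 0, "m": np.zeros(4), "v": np.zeros(4), "s": np.zeros(4)}
g = np.array([0.10, -0.20, 0.05, 0.00])
p = snr_adam_step(p, g, state)
```

Under this reading, coordinates whose gradients fluctuate around zero (noise-dominated) get an SNR near 0 and are damped, while coordinates with a persistent gradient direction (signal-dominated) pass through at roughly the usual Adam step; the only memory overhead over Adam is the single buffer `s`.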