There Will Be a Scientific Theory of Deep Learning

arXiv stat.ML · April 24, 2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper argues that a unified, scientific theory of deep learning is starting to emerge: one that characterizes key properties and statistics of neural networks’ training dynamics, hidden representations, final weights, and performance.
  • It synthesizes ongoing research into five categories (idealized solvable settings, tractable limits, simple macroscopic laws, hyperparameter-focused theories, and universal behaviors) to support the case for such a theory; a worked example of one such macroscopic law appears after this list.
  • The proposed “learning mechanics” framing emphasizes dynamics during training, coarse aggregate statistics, and falsifiable quantitative predictions, positioning the theory as a mechanics of learning.
  • The authors connect learning mechanics with statistical and information-theoretic approaches and suggest a mutually beneficial link with mechanistic interpretability.
  • The paper also addresses skepticism about whether fundamental theory is possible or valuable, and points to open research directions plus beginner-friendly guidance via an associated website.
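
To make the “simple macroscopic laws” category concrete, one widely cited example from the scaling-laws literature (an illustration drawn from outside this paper, not one it singles out) is the parametric form of Hoffmann et al. (2022), which predicts a model’s test loss from its parameter count N and training-token count D:

```latex
% Chinchilla-style scaling law (Hoffmann et al., 2022), shown as an
% illustrative instance of a "simple macroscopic law"; it is an example
% from the literature, not a formula stated by this paper.
% E is the irreducible loss; A, B, \alpha, \beta are fitted constants.
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```

Laws of this form illustrate the coarse, falsifiable flavor of prediction the abstract below emphasizes: a handful of fitted constants summarizing the aggregate behavior of billions of individual weights.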

Abstract

In this paper, we make the case that a scientific theory of deep learning is emerging. By this we mean a theory which characterizes important properties and statistics of the training process, hidden representations, final weights, and performance of neural networks. We pull together major strands of ongoing research in deep learning theory and identify five growing bodies of work that point toward such a theory: (a) solvable idealized settings that provide intuition for learning dynamics in realistic systems; (b) tractable limits that reveal insights into fundamental learning phenomena; (c) simple mathematical laws that capture important macroscopic observables; (d) theories of hyperparameters that disentangle them from the rest of the training process, leaving simpler systems behind; and (e) universal behaviors shared across systems and settings which clarify which phenomena call for explanation. Taken together, these bodies of work share certain broad traits: they are concerned with the dynamics of the training process; they primarily seek to describe coarse aggregate statistics; and they emphasize falsifiable quantitative predictions. We argue that the emerging theory is best thought of as a mechanics of the learning process, and suggest the name learning mechanics. We discuss the relationship between this mechanics perspective and other approaches for building a theory of deep learning, including the statistical and information-theoretic perspectives. In particular, we anticipate a symbiotic relationship between learning mechanics and mechanistic interpretability. We also review and address common arguments that fundamental theory will not be possible or is not important. We conclude with a portrait of important open directions in learning mechanics and advice for beginners. We host further introductory materials, perspectives, and open questions at learningmechanics.pub.
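
As a sketch of what “falsifiable quantitative predictions” can look like in practice, the snippet below fits an assumed one-variable power law L(N) = a·N^(-alpha) + c to synthetic losses from small models, then tests its extrapolation on held-out larger ones. The functional form, constants, and data are all illustrative assumptions, not taken from the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical macroscopic law: test loss as a power law in model size N.
# The form and every constant below are illustrative, not from the paper.
def scaling_law(N, a, alpha, c):
    """L(N) = a * N**(-alpha) + c, with irreducible loss c."""
    return a * N ** (-alpha) + c

# Synthetic "experiments": noisy losses measured at six model sizes.
rng = np.random.default_rng(0)
sizes = np.array([1e6, 3e6, 1e7, 3e7, 1e8, 3e8])
losses = scaling_law(sizes, a=200.0, alpha=0.3, c=1.8)
losses *= 1 + 0.01 * rng.standard_normal(sizes.size)

# Fit the law using only the four smallest models...
params, _ = curve_fit(scaling_law, sizes[:4], losses[:4], p0=[100.0, 0.5, 1.0])

# ...then check its falsifiable prediction on the two held-out larger ones.
predicted = scaling_law(sizes[4:], *params)
print("fitted (a, alpha, c):", params)
print("predicted losses:", predicted)
print("observed losses: ", losses[4:])
```

The held-out check is the point of the exercise: a macroscopic law of this kind earns its keep only if constants fitted at small scale keep predicting measurements at scales the fit never saw.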