Towards Initialization-dependent and Non-vacuous Generalization Bounds for Overparameterized Shallow Neural Networks

arXiv cs.LG / 4/2/2026


Key Points

  • The paper addresses “benign overfitting” in overparameterized shallow neural networks by linking generalization to the distance from initialization rather than parameter count alone.
  • It argues that prior initialization-dependent analyses were not effective for overparameterized models because their bounds depend on the spectral norm of the initialization matrix, which can grow as the square root of the width.
  • The authors derive the first fully initialization-dependent complexity bounds for shallow networks with general Lipschitz activations, achieving only logarithmic dependence on width.
  • The proposed bounds are based on the path-norm of the distance from initialization, using a new “peeling” technique to manage the associated technical constraints.
  • The work provides an accompanying lower bound (tight up to a constant factor) and empirical comparisons showing the resulting generalization bounds are non-vacuous for overparameterized settings.

Abstract

Overparameterized neural networks often show a benign overfitting property in the sense of achieving excellent generalization despite the number of parameters exceeding the number of training examples. A promising direction for explaining benign overfitting is to relate generalization to the norm of the distance from initialization, motivated by the empirical observation that this distance is often significantly smaller than the norm of the weights themselves. However, existing initialization-dependent complexity analyses cannot fully exploit the power of initialization since the associated bounds depend on the spectral norm of the initialization matrix, which can scale as a square-root function of the width, making the bounds ineffective for overparameterized models. In this paper, we develop the first *fully* initialization-dependent complexity bounds for shallow neural networks with general Lipschitz activation functions, which enjoy a logarithmic dependence on the width. Our bounds depend on the path-norm of the distance from initialization and are derived by introducing a new peeling technique to handle the challenges posed by the initialization-dependent constraint. We also develop a lower bound that is tight up to a constant factor. Finally, we conduct empirical comparisons and show that our generalization analysis implies non-vacuous bounds for overparameterized networks.
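
To make the central quantity more concrete, here is a minimal, hypothetical sketch of one plausible reading of a "path-norm of the distance from initialization" for a shallow network f(x) = Σ_j v_j σ(w_jᵀx): the usual path-norm applied to the difference between the trained and initial weights. The paper's exact definition may differ; the function name, the ℓ1 norm over hidden weights, and the product-of-displacements form are illustrative assumptions, not the authors' formula.

```python
import numpy as np

def path_norm_of_displacement(W0, W, v0, v):
    """Illustrative path-type norm of the displacement from initialization.

    W0, W : (width, d) hidden-layer weights at initialization / after training.
    v0, v : (width,)   output-layer weights at initialization / after training.
    Returns sum_j |v_j - v_j^0| * ||w_j - w_j^0||_1, i.e. the path-norm of the
    "difference network" (one hedged instantiation of the concept).
    """
    hidden_disp = np.abs(W - W0).sum(axis=1)  # ||w_j - w_j^0||_1 per hidden unit
    output_disp = np.abs(v - v0)              # |v_j - v_j^0| per hidden unit
    return float(output_disp @ hidden_disp)   # sum over all input->hidden->output paths

# Example: a wide random network whose weights have moved only slightly
# from their initialization (a stand-in for the post-training regime).
rng = np.random.default_rng(0)
d, width = 10, 5000
W0 = rng.normal(scale=1 / np.sqrt(d), size=(width, d))
v0 = rng.normal(scale=1 / np.sqrt(width), size=width)
W = W0 + 1e-3 * rng.normal(size=W0.shape)
v = v0 + 1e-3 * rng.normal(size=v0.shape)

print(path_norm_of_displacement(W0, W, v0, v))
```

The point of such a measure, per the abstract, is that it depends only on how far the weights have moved from initialization, so it can remain small for overparameterized networks even when the raw weight norms (or the spectral norm of the initialization matrix) grow with width.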