Dynamical structure of vanishing gradient and overfitting in multi-layer perceptrons

arXiv cs.LG / 4/6/2026


Key Points

  • The paper proposes a minimal dynamical system model (inspired by Fukumizu and Amari) to explain how vanishing gradients and overfitting arise during gradient-descent training of multi-layer perceptrons (MLPs).
  • It describes learning trajectories that can pass through plateau and near-optimal regions, both consisting of saddle structures, before eventually moving into an overfitting region.
  • Under conditions on the training data, the authors prove (with high probability) that the overfitting region collapses to a single attractor up to symmetries, effectively corresponding to the overfitting outcome.
  • The authors also show that with a finite noisy dataset, an MLP cannot converge to the theoretical optimum and instead must converge to an overfitting solution.

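As a rough illustration of the vanishing-gradient effect the paper analyzes (this sketch is not the authors' minimal model; the depth, width, sigmoid activations, and initialization scale are all arbitrary choices), one can backpropagate through a deep MLP and compare per-layer gradient norms:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

depth, width = 10, 32
Ws = [rng.normal(0, 1.0 / np.sqrt(width), (width, width)) for _ in range(depth)]

x = rng.normal(size=(width, 1))
# Forward pass, storing activations for backprop.
acts = [x]
for W in Ws:
    acts.append(sigmoid(W @ acts[-1]))

# Backward pass from a simple quadratic loss L = 0.5 * ||a_L||^2.
grad_norms = []
delta = acts[-1]                      # dL/da_L
for W, a_prev, a in zip(reversed(Ws), reversed(acts[:-1]), reversed(acts[1:])):
    delta = delta * a * (1 - a)       # through the sigmoid derivative (<= 0.25)
    grad_norms.append(float(np.linalg.norm(delta @ a_prev.T)))  # ||dL/dW||
    delta = W.T @ delta               # propagate to the previous layer
grad_norms = grad_norms[::-1]         # index 0 = first (earliest) layer

print(f"grad norm, first layer: {grad_norms[0]:.2e}")
print(f"grad norm, last layer:  {grad_norms[-1]:.2e}")
```

Because each layer multiplies the backpropagated signal by a sigmoid derivative of at most 0.25, the first-layer gradient comes out orders of magnitude smaller than the last-layer gradient, which is why early layers of deep sigmoid MLPs learn so slowly.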
Abstract

Vanishing gradients and overfitting are two of the most extensively studied problems in the machine learning literature. However, they are frequently considered in asymptotic settings that obscure the underlying dynamical mechanisms responsible for their emergence. In this paper, we aim to provide a clear dynamical description of learning in multi-layer perceptrons (MLPs). To this end, we introduce a minimal model, inspired by studies by Fukumizu and Amari, to investigate vanishing gradients and overfitting in MLPs trained via gradient descent. Within this model, we show that the learning dynamics may pass through plateau regions and near-optimal regions during training, both of which consist of saddle structures, before ultimately converging to the overfitting region. Under suitable conditions on the training dataset, we prove that, with high probability, the overfitting region collapses to a single attractor modulo symmetry, which corresponds to the overfitting outcome. Moreover, we show that any MLP trained on a finite noisy dataset cannot converge to the theoretical optimum and instead necessarily converges to an overfitting solution.