Gradient flow dynamics of shallow ReLU networks for square loss and orthogonal inputs

arXiv stat.ML / 4/15/2026


Key Points

  • The paper gives, for orthogonal inputs and small initialization, a precise analysis of the gradient flow dynamics of one-hidden-layer ReLU neural networks trained with mean squared error (square loss); a minimal numerical sketch of this setup follows the list.
  • It shows that, despite the non-convexity of the training problem, gradient flow converges to zero loss.
  • The authors characterize the network’s implicit bias: among the solutions that fit the data, training selects one of minimum variation norm.
  • The study quantifies the “initial alignment” phenomenon and proves that training follows a specific saddle-to-saddle dynamical path.
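
The sketch below is a hedged illustration of the setting, not the paper's code: it runs small-step gradient descent (an Euler discretization of gradient flow) on a one-hidden-layer ReLU network with mean squared error, using exactly orthogonal inputs and a small initialization. All sizes, the scale, the step size, and the step count are our illustrative choices.

```python
import numpy as np

# Minimal sketch (illustrative, not the paper's code): Euler discretization of
# gradient flow for f(x) = sum_j a_j * relu(w_j . x), trained on the mean
# squared error with orthogonal inputs and small initialization.

rng = np.random.default_rng(0)

n, d, m = 5, 8, 50                    # n orthogonal inputs in R^d, m hidden units
X = np.eye(d)[:n]                     # rows of the identity: exactly orthogonal inputs
y = rng.normal(size=n)                # arbitrary targets

scale = 1e-3                          # "small initialization" regime
W = scale * rng.normal(size=(m, d))   # input weights w_j
a = scale * rng.normal(size=m)        # output weights a_j

eta, steps = 1e-2, 50_000             # small step size approximates gradient flow

for t in range(steps):
    pre = X @ W.T                     # (n, m) pre-activations w_j . x_i
    h = np.maximum(pre, 0.0)          # ReLU activations
    pred = h @ a                      # network outputs f(x_i)
    r = pred - y                      # residuals
    loss = 0.5 * np.mean(r**2)

    # Gradients of the mean squared error w.r.t. a and W.
    grad_a = h.T @ r / n
    grad_W = ((r[:, None] * (pre > 0) * a[None, :]).T @ X) / n

    a -= eta * grad_a
    W -= eta * grad_W

print(f"final loss: {loss:.3e}")      # decays toward zero despite non-convexity
```

Logging the loss over time in this sketch also makes the saddle-to-saddle picture visible: the loss tends to sit on long plateaus separated by sharp drops rather than decreasing smoothly.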

Abstract

The training of neural networks by gradient descent methods is a cornerstone of the deep learning revolution. Yet, despite some recent progress, a complete theory explaining its success is still missing. This article presents, for orthogonal input vectors, a precise description of the gradient flow dynamics of training one-hidden-layer ReLU neural networks for the mean squared error at small initialisation. In this setting, despite non-convexity, we show that the gradient flow converges to zero loss and characterise its implicit bias towards minimum variation norm. Furthermore, some interesting phenomena are highlighted: a quantitative description of the initial alignment phenomenon and a proof that the process follows a specific saddle-to-saddle dynamics.
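
The implicit-bias claim concerns the variation norm. As a hedged illustration (the name `path_norm` is ours, not the paper's), the helper below computes the standard finite-width quantity that upper-bounds the variation norm of the represented function; tracking it along a training run like the sketch above is one simple way to probe the claimed bias.

```python
import numpy as np

# Hypothetical helper (our naming, not from the paper): for a finite-width
# network f(x) = sum_j a_j * relu(w_j . x), the quantity
#     sum_j |a_j| * ||w_j||_2
# upper-bounds the variation norm of f, so a small value of this sum
# certifies a small variation norm for the trained network.

def path_norm(a: np.ndarray, W: np.ndarray) -> float:
    """Sum over hidden units j of |a_j| * ||w_j||_2."""
    return float(np.sum(np.abs(a) * np.linalg.norm(W, axis=1)))
```

For the network in the earlier sketch, one could log `path_norm(a, W)` every few hundred steps; under the paper's result, the small-initialization trajectory is expected to end near the smallest such value achievable at zero loss.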