On the Loss Landscape Geometry of Regularized Deep Matrix Factorization: Uniqueness and Sharpness

arXiv stat.ML / 3/31/2026


Key Points

  • The paper analyzes how weight decay (ℓ^2 regularization), ubiquitous in deep learning practice, shapes the loss landscape of deep matrix factorization / deep linear network training with squared-error loss.
  • It proves that the training problem admits a unique end-to-end minimizer for every target matrix representable via the factorization, outside an exceptional set of target matrices of Lebesgue measure zero determined by the depth and the regularization parameter.
  • The authors show that the Hessian spectrum is identical across all minimizers of the regularized deep scalar factorization problem, revealing a rigid curvature structure in the loss landscape.
  • They further establish that, whenever the target matrix lies outside the measure-zero exceptional set, the Frobenius norm of each layer is the same at every minimizer; this constancy in turn yields a global lower bound on the trace of the Hessian at any minimizer (see the numerical sketch after this list).
  • Finally, the paper derives a critical threshold for the regularization parameter above which the unique end-to-end minimizer collapses to the zero solution.
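
To make the setup concrete, here is a minimal NumPy sketch (our illustration, not the authors' code) of the regularized objective f(W_1, ..., W_L) = ||W_L ... W_1 - M||_F^2 + λ Σ_i ||W_i||_F^2. It runs plain gradient descent from two independent random initializations and prints the per-layer Frobenius norms, which, for a generic target M, the paper predicts should coincide across minimizers. The dimensions, target, and hyperparameters below are arbitrary illustrative choices, and gradient descent is expected, though not guaranteed, to reach a minimizer here.

```python
import numpy as np

def loss_and_grads(Ws, M, lam):
    """Loss and per-layer gradients of
    ||W_L ... W_1 - M||_F^2 + lam * sum_i ||W_i||_F^2."""
    L = len(Ws)
    # Forward partial products: fwd[i] = W_i ... W_1 (fwd[0] = I).
    fwd = [np.eye(M.shape[1])]
    for W in Ws:
        fwd.append(W @ fwd[-1])
    R = fwd[-1] - M  # end-to-end residual
    # Sweep backward, maintaining back = W_L ... W_{i+2} for layer i+1.
    grads, back = [], np.eye(M.shape[0])
    for i in range(L - 1, -1, -1):
        grads.append(2.0 * back.T @ R @ fwd[i].T + 2.0 * lam * Ws[i])
        back = back @ Ws[i]
    grads.reverse()
    f = float(np.sum(R ** 2)) + lam * sum(float(np.sum(W ** 2)) for W in Ws)
    return f, grads

def train(Ws, M, lam, steps=40000, lr=2e-3):
    # Plain gradient descent; step size chosen for illustration only.
    for _ in range(steps):
        _, grads = loss_and_grads(Ws, M, lam)
        Ws = [W - lr * g for W, g in zip(Ws, grads)]
    return Ws

n, depth, lam = 4, 3, 0.05
M = np.random.default_rng(0).standard_normal((n, n))

# Two independent initializations; for a generic target the per-layer
# Frobenius norms should agree at the two minimizers found.
for seed in (1, 2):
    rng = np.random.default_rng(seed)
    Ws = train([0.5 * rng.standard_normal((n, n)) for _ in range(depth)], M, lam)
    print("layer norms:", [round(float(np.linalg.norm(W)), 3) for W in Ws])
```

As a side note, the printed norms should also roughly agree layer-to-layer: rescaling W_i by c and W_j by 1/c leaves the end-to-end product fixed while changing only the penalty, so at any minimizer the layer Frobenius norms must balance.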

Abstract

Weight decay is ubiquitous in training deep neural network architectures. Its empirical success is often attributed to capacity control; nonetheless, our theoretical understanding of its effect on the loss landscape and the set of minimizers remains limited. In this paper, we show that ℓ^2-regularized deep matrix factorization/deep linear network training problems with squared-error loss admit a unique end-to-end minimizer for all target matrices subject to factorization, except for a set of Lebesgue measure zero determined by the depth and the regularization parameter. This observation reveals fundamental properties of the loss landscape of regularized deep matrix factorization problems: the Hessian spectrum is constant across all minimizers of the regularized deep scalar factorization problem with squared-error loss. Moreover, we show that, in regularized deep matrix factorization problems with squared-error loss, if the target matrix does not belong to the Lebesgue measure-zero set, then the Frobenius norm of each layer is constant across all minimizers. This, in turn, yields a global lower bound on the trace of the Hessian evaluated at any minimizer of the regularized deep matrix factorization problem. Furthermore, we establish a critical threshold for the regularization parameter above which the unique end-to-end minimizer collapses to zero.
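
As a sanity check on the collapse phenomenon, consider the simplest instance, the depth-2 scalar problem f(w_1, w_2) = (w_1 w_2 - y)^2 + λ(w_1^2 + w_2^2). This worked example is a standard computation, not a result quoted from the paper: minimizing the penalty over all factorizations with w_1 w_2 = a gives 2λ|a| (by AM-GM, attained at |w_1| = |w_2|), so the end-to-end problem reduces to g(a) = (a - y)^2 + 2λ|a|, whose minimizer is the soft-thresholded value a* = sign(y)·max(|y| - λ, 0). The minimizer therefore collapses to zero exactly when λ ≥ |y|. The sketch below checks this on a grid; the value of y and the λ grid are arbitrary choices.

```python
import numpy as np

# Depth-2 scalar case: the end-to-end minimizer of
#   f(w1, w2) = (w1*w2 - y)^2 + lam*(w1^2 + w2^2)
# is a* = sign(y)*max(|y| - lam, 0), so it hits zero exactly at lam >= |y|.
y = 1.5
for lam in (0.5, 1.0, 1.4, 1.5, 2.0):
    # Minimize g(a) = (a - y)^2 + 2*lam*|a| on a dense grid as a sanity check.
    a = np.linspace(-3.0, 3.0, 200001)
    g = (a - y) ** 2 + 2.0 * lam * np.abs(a)
    a_star = a[np.argmin(g)]
    closed_form = np.sign(y) * max(abs(y) - lam, 0.0)
    print(f"lam={lam:4.2f}  grid argmin={a_star:+.3f}  closed form={closed_form:+.3f}")
```

For depths beyond two the induced penalty on the end-to-end value changes shape, so this closed form should be read only as the simplest illustration of the thresholding behavior the paper establishes in general.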