On the Asymptotics of Self-Supervised Pre-training: Two-Stage M-Estimation and Representation Symmetry

arXiv cs.LG · March 31, 2026


Key Points

  • The paper develops an asymptotic theory of self-supervised pre-training by casting it as a two-stage M-estimation problem (a stylized formulation follows this list), capturing the interaction between pre-training and downstream fine-tuning more sharply than prior theoretical bounds.
  • It addresses representation-learning identifiability issues where pre-training parameters are only determined up to a group symmetry, using Riemannian geometry to study intrinsic (symmetry-invariant) parameters.
  • The authors connect the intrinsic pre-training representation to downstream prediction through orbit-invariance and precisely characterize the limiting distribution of downstream test risk.
  • They validate the main results across several case studies—spectral pre-training, factor models, and Gaussian mixture models—showing improved problem-specific factors over earlier approaches when the assumptions apply.
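
For intuition, a stylized version of the two-stage setup (the notation here is illustrative, not the paper's): an unlabeled sample of size $n$ yields the pre-training estimator, and a labeled sample of size $m$ the downstream one,

$$
\hat\theta_n \in \operatorname*{arg\,min}_{\theta \in \Theta}\; \frac{1}{n}\sum_{i=1}^{n} \ell_{\mathrm{pre}}(\theta;\, X_i),
\qquad
\hat\beta_m \in \operatorname*{arg\,min}_{\beta}\; \frac{1}{m}\sum_{j=1}^{m} \ell_{\mathrm{down}}\big(\beta;\, \hat\theta_n;\, (\tilde X_j, Y_j)\big),
$$

where $\theta$ is identified only up to the action of a symmetry group $G$ on $\Theta$ (for instance, orthogonal rotations of a learned subspace), and orbit-invariance means the downstream loss depends on $\theta$ only through its orbit $\{g \cdot \theta : g \in G\}$.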

Abstract

Self-supervised pre-training, where large corpora of unlabeled data are used to learn representations for downstream fine-tuning, has become a cornerstone of modern machine learning. While a growing body of theoretical work has begun to analyze this paradigm, existing bounds leave open the question of how sharp the current rates are, and whether they accurately capture the complex interaction between pre-training and fine-tuning. In this paper, we address this gap by developing an asymptotic theory of pre-training via two-stage M-estimation. A key challenge is that the pre-training estimator is often identifiable only up to a group symmetry, a feature common in representation learning that requires careful treatment. We address this issue using tools from Riemannian geometry to study the intrinsic parameters of the pre-training representation, which we link with the downstream predictor through a notion of orbit-invariance, precisely characterizing the limiting distribution of the downstream test risk. We apply our main result to several case studies, including spectral pre-training, factor models, and Gaussian mixture models, and obtain substantial improvements in problem-specific factors over prior art when applicable.
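
To make the symmetry concrete, below is a minimal numerical sketch in the spirit of the spectral pre-training and factor-model case studies. The data-generating process, dimensions, and function names are illustrative choices, not the paper's code. Stage one estimates a k-dimensional representation by PCA on unlabeled data; stage two fits a linear predictor on that representation. Rotating the estimated subspace by any orthogonal matrix leaves the downstream predictions, and hence the test risk, unchanged: this is the orbit-invariance the theory exploits.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy factor model: x = A z + noise, with a k-dimensional latent factor z.
d, k, n_unlab, n_lab = 20, 3, 5000, 200
A = rng.normal(size=(d, k))
Z_unlab = rng.normal(size=(n_unlab, k))
X_unlab = Z_unlab @ A.T + 0.1 * rng.normal(size=(n_unlab, d))

# Stage 1 (pre-training): spectral estimate of the k-dim representation via
# the top-k eigenvectors of the sample covariance (PCA). The estimate is
# identified only up to an orthogonal rotation of its columns.
cov = X_unlab.T @ X_unlab / n_unlab
eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
V_hat = eigvecs[:, -k:]                     # d x k, defined up to O(k)

# Labeled downstream data drawn from the same factor model.
Z_lab = rng.normal(size=(n_lab, k))
X_lab = Z_lab @ A.T + 0.1 * rng.normal(size=(n_lab, d))
y = Z_lab @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=n_lab)

def downstream_fit_predict(V, X_train, y_train, X_test):
    """Stage 2 (fine-tuning): least squares on the representation X V."""
    R_train, R_test = X_train @ V, X_test @ V
    beta, *_ = np.linalg.lstsq(R_train, y_train, rcond=None)
    return R_test @ beta

# Orbit-invariance check: replacing V_hat by V_hat @ Q for any orthogonal Q
# leaves the downstream predictions (and hence the test risk) unchanged.
Q, _ = np.linalg.qr(rng.normal(size=(k, k)))   # random orthogonal matrix
pred_plain = downstream_fit_predict(V_hat, X_lab, y, X_lab)
pred_rotated = downstream_fit_predict(V_hat @ Q, X_lab, y, X_lab)
print(np.allclose(pred_plain, pred_rotated))   # expected: True
```

Only the column space of `V_hat` matters downstream, which is why the analysis works with intrinsic (quotient-space) parameters rather than with any particular representative of the estimated representation.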