Loop Corrections to the Training and Generalization Errors of Random Feature Models

arXiv cs.LG / 4/15/2026


Key Points

  • The paper studies random feature models in which a neural network is sampled from an initialization ensemble, frozen, and only the readout weights are trained (see the sketch after this list).
  • Using a statistical-physics and effective field-theoretic perspective, it analyzes training, test, and generalization errors beyond the mean-kernel (infinite-width) approximation.
  • It shows that ensemble-averaged errors depend not only on the mean induced kernel but also on higher-order fluctuation statistics, because the predictor is a nonlinear functional of the random kernel.
  • The authors derive “loop corrections” and their scaling laws for finite-width effects, and they validate the theoretical predictions with experiments.

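To make the setup in the first key point concrete, here is a minimal sketch of such a random feature model: hidden weights are drawn from a Gaussian initialization ensemble and frozen, and only a ridge-regression readout is fit. The ReLU feature map, the synthetic Gaussian data, and all hyperparameters below are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the paper's code): a random feature model on synthetic data.
# Hidden weights W are drawn from a Gaussian initialization ensemble and frozen;
# only the readout is trained, here by ridge regression. Averaging the test error
# over many draws of W at several widths illustrates the finite-width dependence
# that the mean-kernel (infinite-width) approximation ignores.
import numpy as np

rng = np.random.default_rng(0)

d, n_train, n_test = 20, 200, 1000            # input dim and sample sizes (illustrative)
ridge = 1e-3                                  # readout regularization strength
teacher = rng.standard_normal(d) / np.sqrt(d)

def make_data(n):
    X = rng.standard_normal((n, d))
    y = np.tanh(X @ teacher)                  # toy target function
    return X, y

X_tr, y_tr = make_data(n_train)
X_te, y_te = make_data(n_test)

def rf_errors(width):
    """Train/test MSE for one draw of frozen random features of a given width."""
    W = rng.standard_normal((d, width)) / np.sqrt(d)   # frozen hidden weights
    Phi_tr = np.maximum(X_tr @ W, 0.0)                 # ReLU random features
    Phi_te = np.maximum(X_te @ W, 0.0)
    # Ridge-regression readout: a = (Phi^T Phi + lambda I)^{-1} Phi^T y
    A = Phi_tr.T @ Phi_tr + ridge * np.eye(width)
    a = np.linalg.solve(A, Phi_tr.T @ y_tr)
    train_err = np.mean((Phi_tr @ a - y_tr) ** 2)
    test_err = np.mean((Phi_te @ a - y_te) ** 2)
    return train_err, test_err

for width in (50, 200, 800):
    errs = np.array([rf_errors(width) for _ in range(30)])   # ensemble of weight draws
    print(f"width={width:4d}  mean train={errs[:, 0].mean():.4f}  "
          f"mean test={errs[:, 1].mean():.4f}  test std={errs[:, 1].std():.4f}")
```

Averaging over many weight draws, as in the loop above, is the ensemble average the paper studies; the spread of errors across draws at finite width reflects the kernel fluctuations that the mean-kernel approximation discards.
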
Abstract

We investigate random feature models in which neural networks sampled from a prescribed initialization ensemble are frozen and used as random features, with only the readout weights optimized. Adopting a statistical-physics viewpoint, we study the training, test, and generalization errors beyond the mean-kernel approximation. Since the predictor is a nonlinear functional of the induced random kernel, the ensemble-averaged errors depend not only on the mean kernel but also on higher-order fluctuation statistics. Within an effective field-theoretic framework, these finite-width contributions naturally appear as loop corrections. We derive the loop corrections to the training, test, and generalization errors, obtain their scaling laws, and support the theory with experimental verification.
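
As a schematic illustration (a generic expansion, not the paper's derivation), the claim that ensemble-averaged errors involve fluctuation statistics beyond the mean kernel can be seen by viewing the error as a functional E[K] of the induced random kernel K and Taylor-expanding around the ensemble-mean kernel; the notation below is assumed for illustration.

```latex
% Schematic expansion around the mean kernel \bar K = <K>, with \delta K = K - \bar K.
\begin{align}
  \bigl\langle \mathcal{E}[K] \bigr\rangle
  &= \mathcal{E}[\bar K]
   + \frac{1}{2} \sum_{\mu\nu,\,\rho\lambda}
     \frac{\partial^{2} \mathcal{E}}{\partial K_{\mu\nu}\,\partial K_{\rho\lambda}}
     \bigg|_{\bar K}
     \bigl\langle \delta K_{\mu\nu}\, \delta K_{\rho\lambda} \bigr\rangle
   + \cdots ,
  \qquad \delta K = K - \bar K .
\end{align}
```

The first-order term drops out because the kernel fluctuation averages to zero, so the leading correction is controlled by the kernel covariance, which for width-N random features is typically suppressed as 1/N; this is the sense in which such finite-width contributions appear as loop corrections with definite scaling in the width.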