Loop Corrections to the Training and Generalization Errors of Random Feature Models
arXiv cs.LG / April 15, 2026
Key Points
- The paper studies random feature models, in which a neural network is drawn from an initialization ensemble and frozen, so that only the readout weights are trained (a minimal sketch follows this list).
- From a statistical-physics, effective-field-theory perspective, it analyzes the training, test, and generalization errors beyond the mean-kernel (infinite-width) approximation.
- Because the trained predictor is a nonlinear functional of the random kernel, the ensemble-averaged errors depend not only on the mean induced kernel but also on higher-order fluctuation statistics (illustrated in the toy experiment below).
- The authors derive “loop corrections” capturing these finite-width effects, obtain their scaling laws, and validate the theoretical predictions with experiments.
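
The setup in the first bullet can be made concrete with a short sketch. Everything below is an illustrative assumption rather than the paper's exact configuration: ReLU features, Gaussian inputs, a linear teacher, and a ridge-regression readout.

```python
# Minimal random feature model, a sketch under assumed choices (ReLU
# features, Gaussian inputs, ridge readout); the paper's exact
# architecture and task may differ.
import numpy as np

rng = np.random.default_rng(0)

d, n_train, n_test, width, ridge = 20, 100, 1000, 200, 1e-3

# Toy teacher task (assumed): linear target plus label noise.
w_star = rng.normal(size=d) / np.sqrt(d)
X_tr = rng.normal(size=(n_train, d))
X_te = rng.normal(size=(n_test, d))
y_tr = X_tr @ w_star + 0.1 * rng.normal(size=n_train)
y_te = X_te @ w_star

# Sample the hidden layer from its initialization ensemble and freeze it.
W = rng.normal(size=(width, d))

def features(X):
    """Frozen random feature map phi(x) = relu(W x) / sqrt(width)."""
    return np.maximum(X @ W.T, 0.0) / np.sqrt(width)

# Train only the readout weights, here by ridge regression.
Phi_tr, Phi_te = features(X_tr), features(X_te)
a = np.linalg.solve(Phi_tr.T @ Phi_tr + ridge * np.eye(width), Phi_tr.T @ y_tr)

print(f"train error: {np.mean((Phi_tr @ a - y_tr) ** 2):.4f}")
print(f"test error:  {np.mean((Phi_te @ a - y_te) ** 2):.4f}")
```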
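The third and fourth bullets can be illustrated by averaging over the initialization ensemble. The following toy experiment (again under assumed choices, not the paper's protocol) compares the ensemble-averaged test error at several finite widths against the error of the mean-kernel predictor; for ReLU features the exact mean kernel is the order-1 arc-cosine kernel (Cho & Saul, 2009).

```python
# Toy ensemble experiment (illustrative assumptions, not the paper's exact
# setup): compare the ensemble-averaged test error of finite-width random
# feature predictors with the error of the mean-kernel (infinite-width)
# predictor. The gap is the finite-width effect loop corrections describe.
import numpy as np

rng = np.random.default_rng(1)

d, n_train, n_test, ridge = 20, 100, 500, 1e-3
w_star = rng.normal(size=d) / np.sqrt(d)
X_tr = rng.normal(size=(n_train, d))
X_te = rng.normal(size=(n_test, d))
y_tr = X_tr @ w_star + 0.1 * rng.normal(size=n_train)
y_te = X_te @ w_star

def relu_mean_kernel(A, B):
    """Exact E_w[relu(w.a) relu(w.b)] for w ~ N(0, I_d):
    the order-1 arc-cosine kernel (Cho & Saul, 2009)."""
    na = np.linalg.norm(A, axis=1)[:, None]
    nb = np.linalg.norm(B, axis=1)[None, :]
    cos = np.clip(A @ B.T / (na * nb), -1.0, 1.0)
    theta = np.arccos(cos)
    return na * nb * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / (2 * np.pi)

def kernel_ridge_error(K_tr, K_te_tr):
    """Test error of the kernel ridge predictor built from a given kernel."""
    alpha = np.linalg.solve(K_tr + ridge * np.eye(n_train), y_tr)
    return np.mean((K_te_tr @ alpha - y_te) ** 2)

# Mean-kernel (infinite-width) baseline.
err_mean = kernel_ridge_error(relu_mean_kernel(X_tr, X_tr),
                              relu_mean_kernel(X_te, X_tr))

# Finite widths: the empirical kernel fluctuates around its mean, and the
# averaged error generally differs from the mean-kernel error because the
# predictor is a nonlinear functional of the random kernel.
for width in (50, 100, 200, 400, 800):
    errs = []
    for _ in range(50):
        W = rng.normal(size=(width, d))
        Phi_tr = np.maximum(X_tr @ W.T, 0.0) / np.sqrt(width)
        Phi_te = np.maximum(X_te @ W.T, 0.0) / np.sqrt(width)
        errs.append(kernel_ridge_error(Phi_tr @ Phi_tr.T, Phi_te @ Phi_tr.T))
    print(f"width {width:4d}: mean test error {np.mean(errs):.4f} "
          f"(mean-kernel limit {err_mean:.4f})")
```

In this sketch the discrepancy arises precisely because the ridge predictor depends nonlinearly on the empirical kernel, so averaging the error is not the same as using the averaged kernel; the gap shrinks as the width grows, which is the behavior the paper's loop-correction scaling laws are meant to quantify.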