On Interpolation Formulas Describing Neural Network Generalization
arXiv cs.LG / 3/17/2026
Key Points
- The work extends Domingos' interpolation formula to stochastic training by introducing a stochastic gradient kernel built on a continuous-time diffusion approximation of SGD (see the SDE sketch below).
- It proves stochastic versions of Domingos' theorems, showing that the expected network output admits a kernel-machine representation with optimizer-specific weights that reflect loss-dependent contributions and gradient alignment along training trajectories (the deterministic baseline identity is recalled below).
- It links generalization error to the null space of the integral operator induced by the stochastic gradient kernel (see the decomposition below), and provides a unified interpretation of diffusion models and GANs as stage-wise corrections shaped by geometry.
- It presents numerical experiments illustrating how the implicit kernels evolve during optimization, supporting a feature-space memory view in which test predictions arise from kernel-weighted retrieval of stored tangent features (illustrated in the code sketch after this list).
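The summary above does not reproduce the paper's formulas. For orientation, the standard continuous-time diffusion approximation of SGD (in the style of Li, Tai & E, 2017), presumably the starting point for the stochastic gradient kernel, replaces the discrete update with a stochastic differential equation:

```latex
% Standard SDE model of SGD with learning rate \eta and minibatch-gradient
% covariance \Sigma(w); the precise form used in the paper may differ.
dw_t = -\nabla L(w_t)\, dt + \sqrt{\eta\, \Sigma(w_t)}\, dB_t
```

Under this model, training trajectories are random, and kernel quantities defined along them become expectations over the diffusion.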
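The deterministic identity being extended is Domingos' (2020) observation that gradient-flow training makes the network output evolve as a kernel machine over the training set. With per-example loss L(f, y) and tangent kernel K_w(x, x') = ∇_w f_w(x) · ∇_w f_w(x'), integrating the output dynamics gives:

```latex
% Gradient-flow output dynamics and the resulting path-kernel representation.
\frac{d}{dt} f_{w(t)}(x)
  = -\sum_{i=1}^{n} \frac{\partial L}{\partial f}\big(f_{w(t)}(x_i), y_i\big)\,
    K_{w(t)}(x, x_i)
\;\Longrightarrow\;
f_{w(T)}(x) = \underbrace{f_{w(0)}(x)}_{b}
  + \sum_{i=1}^{n} a_i\, K^{\mathrm{path}}(x, x_i),
\qquad
K^{\mathrm{path}}(x, x_i) = \int_0^T K_{w(t)}(x, x_i)\, dt
```

Here each a_i is a path-weighted average of the negative loss derivative -∂L/∂f at example i. Taking expectations over the SGD diffusion would yield the "optimizer-specific weighting" referred to in the second point; the paper's exact weights are not given in this summary.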
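The null-space link in the third point can be read against the standard L²(ρ) decomposition for a self-adjoint kernel integral operator; the paper's precise statement may differ, but plausibly takes this form:

```latex
% Integral operator of a symmetric, square-integrable kernel K under the
% data distribution \rho, and the induced error floor for kernel predictors.
(T_K g)(x) = \int K(x, x')\, g(x')\, d\rho(x'),
\qquad
L^2(\rho) = \overline{\operatorname{ran}}\, T_K \oplus \operatorname{null}\, T_K
% Any component of the target f^* lying in null(T_K) is orthogonal to every
% function the kernel machine can represent, so it lower-bounds the error:
\inf_{h \in \overline{\operatorname{ran}}\, T_K}
  \| f^* - h \|_{L^2(\rho)}^2
  = \| f^*_{\operatorname{null}} \|_{L^2(\rho)}^2
```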
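To make "kernel-weighted retrieval of stored tangent features" concrete, here is a minimal JAX sketch, not the paper's code: it trains a tiny MLP with full-batch gradient descent (the deterministic baseline; the stochastic kernel would average such runs over minibatch noise), accumulates each training point's discrete path-kernel contribution to a test prediction, and compares that kernel reconstruction with the network's actual output change. All sizes and hyperparameters are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)

# Tiny one-hidden-layer MLP with scalar output.
def init_params(key, d_in=2, width=16):
    ka, kb = jax.random.split(key)
    return {
        "W1": jax.random.normal(ka, (d_in, width)) / jnp.sqrt(d_in),
        "W2": jax.random.normal(kb, (width, 1)) / jnp.sqrt(width),
    }

def f(params, x):
    h = jnp.tanh(x @ params["W1"])
    return (h @ params["W2"]).squeeze()

# Toy regression data (illustrative).
X = jax.random.normal(k1, (8, 2))
y = jnp.sin(X[:, 0]) - X[:, 1]
x_test = jax.random.normal(k2, (2,))

def loss(params):
    preds = jax.vmap(lambda xi: f(params, xi))(X)
    return 0.5 * jnp.mean((preds - y) ** 2)

# Tangent feature phi(x) = grad_w f(x), flattened into a single vector.
def tangent(params, x):
    g = jax.grad(f)(params, x)
    return jnp.concatenate([v.ravel() for v in jax.tree_util.tree_leaves(g)])

params = init_params(k3)
lr, steps = 0.1, 200
f0_test = f(params, x_test)

# Accumulate the discrete path-kernel contribution of each training point:
# delta_f(x_test) ~ sum_t lr * sum_i (-dL/df_i) * <phi_t(x_test), phi_t(x_i)>
contrib = jnp.zeros(X.shape[0])
for _ in range(steps):
    preds = jax.vmap(lambda xi: f(params, xi))(X)
    resid = (preds - y) / X.shape[0]            # dL/df_i for the mean-squared loss
    phi_test = tangent(params, x_test)
    phis = jax.vmap(lambda xi: tangent(params, xi))(X)   # (n, p) tangent features
    contrib += lr * (-resid) * (phis @ phi_test)         # kernel-weighted retrieval
    grads = jax.grad(loss)(params)
    params = jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

print("network output change:", float(f(params, x_test) - f0_test))
print("path-kernel estimate :", float(contrib.sum()))
```

For small learning rates the two printed numbers agree closely, since each gradient step changes the test output, to first order, by exactly the kernel-weighted sum being accumulated; this is the discrete-time counterpart of the path-kernel identity above.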