Two-Time-Scale Learning Dynamics: A Population View of Neural Network Training

arXiv cs.LG / 3/23/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper introduces a theoretical framework for neural network training based on two-time-scale population dynamics, with fast SGD-like parameter updates and slower selection–mutation dynamics for hyperparameters.
  • It proves a large-population limit for the joint distribution of parameters and hyperparameters and, under strong time-scale separation, derives a selection–mutation equation for the hyperparameter density.
  • For each fixed hyperparameter, the fast parameter dynamics relaxes to a Boltzmann–Gibbs measure, producing an effective fitness that drives the slow evolution.
  • The framework connects population-based learning with bilevel optimization and replicator–mutator models, clarifies when the population mean moves toward the fittest hyperparameter, and highlights the role of noise in balancing exploration against optimization (see the schematic below).

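To fix notation, here is one plausible schematic of the two-time-scale system described above. The specific forms (a Langevin SDE for the fast variables, quadratic noise scaling, a Laplacian mutation term) are illustrative assumptions consistent with the key points, not equations copied from the paper.

```latex
% Fast dynamics: Langevin-type updates of the parameters \theta at fixed
% hyperparameter h, with noise level \sigma.
\[
  d\theta_t \;=\; -\nabla_\theta L(\theta_t, h)\,dt \;+\; \sqrt{2\sigma^2}\,dW_t
\]
% Its stationary law is a Boltzmann--Gibbs measure, which induces an
% effective fitness for each hyperparameter.
\[
  \pi_h(d\theta) \;\propto\; \exp\!\left(-\frac{L(\theta,h)}{\sigma^2}\right) d\theta,
  \qquad
  F(h) \;=\; -\int L(\theta,h)\,\pi_h(d\theta)
\]
% Slow dynamics: a selection--mutation (replicator--mutator) equation for
% the hyperparameter density \rho_t, driven by the effective fitness.
\[
  \partial_t \rho_t(h) \;=\; \rho_t(h)\bigl(F(h) - \bar F_t\bigr) \;+\; \gamma\,\Delta_h \rho_t(h),
  \qquad
  \bar F_t = \int F(h)\,\rho_t(h)\,dh
\]
```

Read this way, the slow equation is a classical replicator–mutator model with fitness F, and the low-noise, mutation-free limit heuristically recovers a bilevel problem: an outer maximization of F(h) over hyperparameters, with the parameters optimized in the inner loop.
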
Abstract

Population-based learning paradigms, including evolutionary strategies, Population-Based Training (PBT), and recent model-merging methods, combine fast within-model optimisation with slower population-level adaptation. Despite their empirical success, a general mathematical description of the resulting collective training dynamics remains incomplete. We introduce a theoretical framework for neural network training based on two-time-scale population dynamics. We model a population of neural networks as an interacting agent system in which network parameters evolve through fast noisy gradient updates of SGD/Langevin type, while hyperparameters evolve through slower selection–mutation dynamics. We prove the large-population limit for the joint distribution of parameters and hyperparameters and, under strong time-scale separation, derive a selection–mutation equation for the hyperparameter density. For each fixed hyperparameter, the fast parameter dynamics relaxes to a Boltzmann–Gibbs measure, inducing an effective fitness for the slow evolution. The averaged dynamics connects population-based learning with bilevel optimisation and classical replicator–mutator models, yields conditions under which the population mean moves toward the fittest hyperparameter, and clarifies the role of noise and diversity in balancing optimisation and exploration. Numerical experiments illustrate both the large-population regime and the reduced two-time-scale dynamics, and indicate that access to the effective fitness, either in closed form or through population-level estimation, can improve population-level updates.
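
To complement the abstract, the following is a minimal NumPy sketch of the two-time-scale loop: a population whose parameters take many Langevin-type gradient steps between infrequent selection–mutation updates of the hyperparameters. The quadratic toy loss, the selection temperature, and the mutation scale are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy loss: for fixed h, theta relaxes toward h, while the h-dependent
# offset makes h* = 1.0 the fittest hyperparameter (illustrative choice).
def loss(theta, h):
    return 0.5 * (theta - h) ** 2 + 0.5 * (h - 1.0) ** 2

N = 200                    # population size
eta, sigma = 0.05, 0.3     # fast step size and Langevin noise level
inner, outer = 50, 300     # fast steps per slow step; number of slow steps
temp, gamma = 0.5, 0.05    # selection temperature; mutation scale

theta = rng.normal(0.0, 1.0, N)   # parameters (fast variables)
h = rng.normal(-2.0, 1.0, N)      # hyperparameters (slow variables)

for _ in range(outer):
    # Fast phase: Langevin / noisy-SGD updates with hyperparameters frozen,
    # so each theta_i approximately samples the Gibbs measure pi_{h_i}.
    for _ in range(inner):
        grad = theta - h          # d(loss)/d(theta) for the toy loss
        theta += -eta * grad + np.sqrt(2.0 * eta) * sigma * rng.normal(size=N)

    # Slow phase: selection-mutation on hyperparameters. Fitness is the
    # negative loss at the relaxed parameters; resampling implements
    # selection and a small Gaussian perturbation implements mutation.
    fitness = -loss(theta, h)
    w = np.exp((fitness - fitness.max()) / temp)
    idx = rng.choice(N, size=N, p=w / w.sum())
    theta, h = theta[idx], h[idx] + gamma * rng.normal(size=N)

print(f"population mean of h: {h.mean():.3f}  (fittest hyperparameter: 1.0)")
```

With strong separation (large `inner` relative to the selection rate), each agent's fitness approximates the effective fitness under the Gibbs measure, and the population mean of `h` drifts toward the fittest hyperparameter, mirroring the reduced dynamics the paper derives.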