Hamiltonian Mechanics of Feature Learning: Bottleneck Structure in Leaky ResNets

arXiv stat.ML / 3/26/2026


Key Points

  • The paper analyzes Leaky ResNets as a family interpolating between ResNets and fully connected networks via an effective depth hyperparameter \(\tilde{L}\) (see the sketch after this list).
  • It introduces “representation geodesics” as continuous paths in representation space (analogous to NeuralODE trajectories) that minimize network parameter norm in an infinite-depth setting.
  • A Hamiltonian/Lagrangian reformulation shows feature learning is governed by a kinetic term (penalizing large layer derivatives \(\partial_p A_p\)) and a potential term (encouraging low-dimensional representations via the “Cost of Identity”).
  • The authors use this dynamics-based balance to explain the emergence of the bottleneck structure: for large \(\tilde{L}\), the representation transitions rapidly into a low-dimensional manifold, moves slowly within it, and then jumps back to the potentially higher-dimensional outputs.
  • Inspired by this phenomenon, the authors train with an adaptive layer step-size designed to account for the separation of timescales.
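
As a concrete (if simplified) picture of the interpolation in the first bullet, here is a minimal NumPy sketch of a discrete Leaky-ResNet-style forward pass. The per-layer mixing coefficient \(\tilde{L}/L\) and the ReLU nonlinearity are assumptions made for illustration; the paper's exact parameterization of Leaky ResNets may differ.

```python
import numpy as np

def leaky_resnet_forward(x, weights, L_eff):
    """Toy Leaky-ResNet-style forward pass (one assumed parameterization).

    Each layer mixes the identity with a learned update, with mixing ratio
    L_eff / L.  mix -> 0 gives near-identity, ResNet-like layers; mix -> 1
    gives plain fully connected layers.
    """
    mix = L_eff / len(weights)  # per-layer weight on the learned update (assumed form)
    a = x
    for W in weights:
        a = (1.0 - mix) * a + mix * np.maximum(W @ a, 0.0)  # ReLU nonlinearity assumed
    return a

# Toy usage: width 16, depth L = 8, effective depth L_eff = 4.
rng = np.random.default_rng(0)
weights = [rng.normal(scale=1.0 / np.sqrt(16), size=(16, 16)) for _ in range(8)]
out = leaky_resnet_forward(rng.normal(size=16), weights, L_eff=4.0)
print(out.shape)  # (16,)
```

With `mix` close to 0, each layer is a small residual perturbation of the identity (ResNet-like); with `mix` equal to 1, each layer fully replaces the representation (fully-connected-like).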

Abstract

We study Leaky ResNets, which interpolate between ResNets and Fully-Connected nets depending on an 'effective depth' hyper-parameter \(\tilde{L}\). In the infinite depth limit, we study 'representation geodesics' \(A_{p}\): continuous paths in representation space (similar to NeuralODEs) from input \(p=0\) to output \(p=1\) that minimize the parameter norm of the network. We give a Lagrangian and Hamiltonian reformulation, which highlights the importance of two terms: a kinetic energy which favors small layer derivatives \(\partial_{p}A_{p}\), and a potential energy that favors low-dimensional representations, as measured by the 'Cost of Identity'. The balance between these two forces offers an intuitive understanding of feature learning in ResNets. We leverage this intuition to explain the emergence of a bottleneck structure, as observed in previous work: for large \(\tilde{L}\) the potential energy dominates and leads to a separation of timescales, where the representation jumps rapidly from the high-dimensional inputs to a low-dimensional representation, moves slowly inside the space of low-dimensional representations, and then jumps back to the potentially high-dimensional outputs. Inspired by this phenomenon, we train with an adaptive layer step-size to adapt to the separation of timescales.
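
Schematically, the Lagrangian picture described in the abstract pairs a kinetic term built from \(\partial_{p}A_{p}\) with a potential term that rewards low-dimensional representations. The display below is only a mnemonic for that balance: the potential \(V\) (standing in for the 'Cost of Identity') is left abstract, and the precise weighting and signs of the two terms belong to the paper's derivation and are not reproduced here.

```latex
% Mnemonic only: kinetic/potential split over representation paths A_p, p in [0,1].
% V stands in for the paper's "Cost of Identity"; weights and signs are schematic.
\[
  \mathcal{S}[A] \;=\; \int_{0}^{1}
    \underbrace{\tfrac{1}{2}\,\lVert \partial_{p} A_{p} \rVert^{2}}_{\text{kinetic: favors small layer derivatives}}
    \,+\,
    \underbrace{V(A_{p})}_{\text{potential: favors low-dimensional } A_{p}}
  \; dp .
\]
```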
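
The abstract's last sentence mentions training with an adaptive layer step-size. The paper's actual scheme is not spelled out here, but the sketch below shows one natural way to realize the idea under an assumed fast-slow-fast speed profile: place layer positions so that each layer covers an equal share of the representation's 'arc length', which yields small steps near the jumps at \(p=0\) and \(p=1\) and larger steps in the slow bottleneck phase.

```python
import numpy as np

def layer_positions(num_layers, speed):
    """Place layer positions p_0 <= ... <= p_L in [0, 1] so that each layer
    covers an equal share of 'arc length' under a speed profile (illustrative only)."""
    grid = np.linspace(0.0, 1.0, 1001)
    arc = np.cumsum(speed(grid))                # cumulative speed ~ arc length
    arc = (arc - arc[0]) / (arc[-1] - arc[0])   # normalize to [0, 1]
    targets = np.linspace(0.0, 1.0, num_layers + 1)
    return np.interp(targets, arc, grid)        # invert: equal arc per layer

# Made-up fast-slow-fast profile: the representation moves quickly near p=0 and p=1
# (the jumps into and out of the low-dimensional manifold) and slowly in between.
speed = lambda p: 1.0 + 20.0 * (np.exp(-(p / 0.05) ** 2) + np.exp(-((1.0 - p) / 0.05) ** 2))

steps = np.diff(layer_positions(8, speed))      # per-layer step sizes in p
print(np.round(steps, 3))                       # small near 0 and 1, large in the middle
```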