The Phase Is the Gradient: Equilibrium Propagation for Frequency Learning in Kuramoto Networks

arXiv cs.LG / 4/14/2026


Key Points

  • The paper proves that in Kuramoto oscillator networks at a stable equilibrium, the phase displacement induced by weak output “nudging” equals the gradient of the loss with respect to the natural frequencies in the limit of nudging strength β→0 (see the sketch after this list).
  • It extends equilibrium propagation by treating the natural frequency as a learnable parameter, and demonstrates that on sparse layered architectures frequency learning can outperform coupling-weight learning among converged seeds (96.0% vs. 83.3% at matched parameter counts).
  • The authors argue that the ~50% convergence failure rate seen under random initialization is due to properties of the loss landscape rather than an incorrect gradient estimate.
  • A topology-aware spectral seeding strategy is proposed and empirically shown to eliminate convergence failures in all settings tested (46/100 → 100/100 seeds on the primary task; 50/50 on a second task, on K-only training, and on a larger architecture).
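
The gradient identity in the first key point maps onto the usual two-phase equilibrium-propagation recipe: relax the network freely, relax it again under a weak nudge applied to the output oscillators, and read the scaled phase displacement off as the frequency gradient. The NumPy sketch below illustrates that recipe under stated assumptions; the explicit-Euler relaxation, the function names, the nudging convention, and the finite-β approximation are illustrative choices, not the paper's implementation.

```python
import numpy as np

def kuramoto_equilibrium(omega, K, theta0, nudge=None, beta=0.0,
                         dt=0.01, steps=200_000, tol=1e-9):
    """Relax phases toward a fixed point of
        dtheta_i/dt = omega_i + sum_j K_ij sin(theta_j - theta_i) + beta * nudge_i(theta)
    via explicit Euler steps. Assumes parameters for which a stable
    phase-locked equilibrium exists."""
    theta = theta0.copy()
    for _ in range(steps):
        drive = omega + (K * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        if nudge is not None:
            drive = drive + beta * nudge(theta)
        if np.max(np.abs(drive)) < tol:   # at equilibrium the drive vanishes
            break
        theta = theta + dt * drive
    return theta

def frequency_gradient_estimate(omega, K, theta0, nudge, beta=1e-3):
    """Two-phase equilibrium-propagation estimate of dL/domega: the phase
    displacement between nudged and free equilibria, scaled by 1/beta.
    The overall sign depends on the chosen nudging convention."""
    theta_free = kuramoto_equilibrium(omega, K, theta0)
    theta_nudged = kuramoto_equilibrium(omega, K, theta_free, nudge=nudge, beta=beta)
    return (theta_nudged - theta_free) / beta
```

Here `nudge` would be a callable returning the per-oscillator output force (for example, minus the derivative of the readout cost with respect to the output phases, and zero elsewhere); the estimate approaches the frequency gradient as β shrinks, at the cost of a noisier finite difference.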

Abstract

We prove that in a coupled Kuramoto oscillator network at stable equilibrium, the physical phase displacement under weak output nudging is the gradient of the loss with respect to natural frequencies, with equality as the nudging strength β tends to zero. Prior oscillator equilibrium propagation work explicitly set aside natural frequency as a learnable parameter; we show that on sparse layered architectures, frequency learning outperforms coupling-weight learning among converged seeds (96.0% vs. 83.3% at matched parameter counts, p = 1.8e-12). The approximately 50% convergence failure rate under random initialization is a loss-landscape property, not a gradient error; topology-aware spectral seeding eliminates it in all settings tested (46/100 → 100/100 seeds on the primary task; 50/50 on a second task, K-only training, and a larger architecture).
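
The abstract does not describe the topology-aware spectral seeding procedure itself. As a purely illustrative reading of the term, the sketch below seeds initial phases from the Fiedler vector of the coupling graph's Laplacian, so the starting configuration already reflects the network topology; the function name, the eigenvector choice, and the phase scaling are assumptions and may differ from the authors' method.

```python
import numpy as np

def spectral_phase_seed(K, scale=np.pi):
    """Illustrative 'topology-aware spectral seeding': set initial phases from
    the Fiedler vector (second-smallest Laplacian eigenvector) of the coupling
    graph. This is a guess at the idea, not the paper's exact recipe."""
    A = (np.abs(K) + np.abs(K).T) / 2       # symmetrized coupling magnitudes
    L = np.diag(A.sum(axis=1)) - A          # graph Laplacian
    _, eigvecs = np.linalg.eigh(L)          # eigenvectors, ascending eigenvalues
    fiedler = eigvecs[:, 1]                 # smoothest non-constant mode
    fiedler = fiedler / (np.abs(fiedler).max() + 1e-12)
    return scale * fiedler                  # phases spread over [-scale, scale]
```

The intuition, consistent with the abstract's framing, is that a structure-aware starting point places the relaxation inside a favorable basin of attraction, avoiding the convergence failures observed under random initialization.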