Learning Latent Graph Geometry via Fixed-Point Schrödinger-Type Activation: A Theoretical Study

arXiv stat.ML · April 28, 2026


Key Points

  • The paper proposes neural network layers defined as stationary states of dissipative Schrödinger-type dynamics on a learned latent graph, yielding differentiable implicit graph layers on stable branches.
  • It introduces learning the latent graph by optimizing over stratified moduli spaces of weighted graphs, using a non-degenerate Kähler–Hessian metric to keep natural-gradient descent and “face crossing” mathematically well posed.
  • The authors show equivalences between a multilayer stationary network and an exact global stationary problem on a constructed “supra-graph,” plus a penalized global relaxation whose stationary states converge to the exact solution as the penalty grows.
  • It derives reverse-mode differentiation as an adjoint method for the global system, and proves that the penalized adjoint converges to the exact adjoint in the same limit as the penalty parameter tends to infinity.
  • Under strong-monotonicity and admissible-lift assumptions, the paper establishes that multiple architecture families (resolvent feed-forward, graph-stationary, supra-graph stationary, and sheaf-based unitary-connection models) represent coincident hypothesis classes, enabling complexity bounds driven by sparse graph/supra-graph geometry.
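The first and fourth key points can be made concrete with a minimal sketch. The snippet below is an illustrative stand-in, not the paper's construction: it uses a `tanh` fixed-point map `z = tanh(A z + B x)` in place of the dissipative Schrödinger-type stationary problem, with `||A||_2 < 1` playing the role of the stable-branch assumption. The adjoint gradient follows from the implicit-function theorem, so no backpropagation through the iteration is needed.

```python
import numpy as np

def fixed_point(A, B, x, tol=1e-12, max_iter=1000):
    """Stationary state z* solving z = tanh(A z + B x) by Picard iteration.

    Illustrative stand-in for an implicit graph layer: the iteration
    contracts (a "stable branch") whenever ||A||_2 < 1, since tanh
    is 1-Lipschitz.
    """
    z = np.zeros(A.shape[0])
    for _ in range(max_iter):
        z_new = np.tanh(A @ z + B @ x)
        if np.max(np.abs(z_new - z)) < tol:
            break
        z = z_new
    return z

def adjoint_grad_x(A, B, z, g):
    """Gradient of L = <g, z*> w.r.t. x via one linear adjoint solve.

    Differentiating z = tanh(A z + B x) at the fixed point gives
    (I - D A) dz = D B dx with D = diag(1 - z**2), hence
    dL/dx = B^T D (I - A^T D)^{-1} g.
    """
    D = np.diag(1.0 - z**2)
    lam = np.linalg.solve(np.eye(len(z)) - A.T @ D, g)
    return B.T @ (D @ lam)

# Hypothetical dimensions and random data, with A scaled to be contractive.
rng = np.random.default_rng(1)
n, m = 5, 3
A = rng.standard_normal((n, n))
A *= 0.5 / np.linalg.norm(A, 2)
B = rng.standard_normal((n, m))
x = rng.standard_normal(m)
g = rng.standard_normal(n)

z = fixed_point(A, B, x)
grad = adjoint_grad_x(A, B, z, g)

# Central finite differences as an independent check of the implicit gradient.
eps = 1e-6
fd = np.array([(g @ fixed_point(A, B, x + eps * e) -
                g @ fixed_point(A, B, x - eps * e)) / (2 * eps)
               for e in np.eye(m)])
print(np.allclose(grad, fd, atol=1e-4))
```

The adjoint solve is the same mechanism the paper recovers as reverse-mode differentiation of the global stationary system, here shown in its simplest one-layer form.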

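The supra-graph equivalence in the key points can also be illustrated on a toy cascade. Under the same assumed `tanh` fixed-point model (not the paper's operators), stacking two implicit layers is equivalent to solving one global stationary problem on a block lower-triangular "supra" system:

```python
import numpy as np

def solve(A, B, u, iters=2000):
    # Fixed-point iteration for z = tanh(A z + B u).
    z = np.zeros(A.shape[0])
    for _ in range(iters):
        z = np.tanh(A @ z + B @ u)
    return z

rng = np.random.default_rng(0)
n, m = 4, 3

# Two illustrative layers, with contractive recurrent blocks.
A1 = rng.standard_normal((n, n)); A1 *= 0.4 / np.linalg.norm(A1, 2)
A2 = rng.standard_normal((n, n)); A2 *= 0.4 / np.linalg.norm(A2, 2)
B1 = rng.standard_normal((n, m))
B2 = 0.5 * rng.standard_normal((n, n))
x = rng.standard_normal(m)

# Layer-by-layer: z1 feeds z2.
z1 = solve(A1, B1, x)
z2 = solve(A2, B2, z1)

# One global stationary problem on the block "supra" system:
# the inter-layer coupling B2 becomes an off-diagonal block.
A_supra = np.block([[A1, np.zeros((n, n))],
                    [B2, A2]])
B_supra = np.vstack([B1, np.zeros((n, m))])
Z = solve(A_supra, B_supra, x)

print(np.allclose(Z, np.concatenate([z1, z2]), atol=1e-8))
```

Because the supra matrix is block lower triangular with contractive diagonal blocks, the global iteration converges to the same stationary state as the sequential pass, which is the toy analogue of the paper's exact supra-graph equivalence.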
Abstract

We study neural architectures in which each hidden layer is defined by the stationary state of a dissipative Schrödinger-type dynamics on a learned latent graph. On stable branches, the local stationary problem defines a differentiable implicit graph layer. To learn the graph itself, we optimize over the stratified moduli space of weighted graphs and equip each stratum with a non-degenerate Kähler–Hessian metric that keeps natural-gradient descent and face crossing well posed. We then show that a multilayer stationary network is equivalent to an exact global stationary problem on a supra-graph, and that it admits a penalized global relaxation whose stationary states converge to the exact one as the penalty parameter tends to infinity. Reverse-mode differentiation is recovered as the adjoint of the exact global system, and the penalized adjoint converges to it in the same limit. Finally, under finite-dimensional strong-monotonicity and admissible-lift assumptions, the corresponding represented hypothesis classes coincide among resolvent feed-forward networks, graph-stationary networks, supra-graph stationary systems, and sheaf-based architectures with unitary connection. The resulting structural identifications yield complexity bounds controlled by sparse graph or supra-graph geometry rather than dense ambient connectivity.
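The abstract's penalty limit can be checked on a deliberately tiny example. The quadratic energy and linear "layer constraint" below are hypothetical, chosen only so the penalized stationarity condition reduces to a linear solve; the point is that the stationary point of the relaxation drifts toward the exactly constrained solution as the penalty grows:

```python
import numpy as np

# Toy energy E(z1, z2) = z1**2/2 + (z2 - 1)**2/2 with the constraint
# z2 = 2*z1 enforced exactly (solution: z1 = 0.4, z2 = 0.8), versus the
# penalized relaxation E + (mu/2) * (z2 - 2*z1)**2.

def penalized_solution(mu):
    # Stationarity (gradient = 0) of the penalized energy is linear:
    #   (1 + 4*mu) z1 - 2*mu z2 = 0
    #   -2*mu z1 + (1 + mu) z2 = 1
    M = np.array([[1 + 4 * mu, -2 * mu],
                  [-2 * mu, 1 + mu]])
    rhs = np.array([0.0, 1.0])
    return np.linalg.solve(M, rhs)

exact = np.array([0.4, 0.8])
for mu in [1.0, 1e2, 1e4, 1e6]:
    z = penalized_solution(mu)
    print(f"mu = {mu:>9.1f}   error = {np.linalg.norm(z - exact):.2e}")
```

The error shrinks roughly like 1/mu, mirroring (in miniature) the abstract's claim that stationary states of the penalized global relaxation converge to the exact supra-graph solution as the penalty parameter tends to infinity.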