LieTrunc-QNN: Lie Algebra Truncation and Quantum Expressivity Phase Transition from LiePrune to Provably Stable Quantum Neural Networks

arXiv cs.LG / 4/6/2026


Key Points

  • The paper proposes LieTrunc-QNN, a geometric framework that models parameterized quantum circuits as Lie subalgebras and connects trainability to Lie-generated dynamics and reachable-state manifold geometry.
  • It formulates a “capacity–plateau principle,” arguing that increasing effective manifold dimension causes exponential gradient suppression via concentration of measure (barren plateaus).
  • By truncating to structured Lie subalgebras (LieTrunc), the reachable manifold is contracted, preventing concentration and yielding non-degenerate gradients.
  • The authors provide proofs of (i) a trainability lower bound and (ii) an upper bound on Fubini–Study metric rank in terms of the algebraic span of generators, showing expressivity depends on algebraic structure rather than parameter count.
  • Experiments for small qubit counts (n=2–6) report stable gradients and preserved metric rank (e.g., rank=16 at n=6) under LieTrunc, alongside evidence for a scaling law between gradient variance and effective dimension.
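The "algebraic span of generators" in these points can be made concrete: the Lie algebra a circuit generates is obtained by closing its generator set under commutators, and the dimension of that closure is the effective dimension the paper ties to trainability. The following is a minimal numpy sketch of that closure computation (an illustration written for this summary, not the authors' code; the function names are ours):

```python
import itertools
import numpy as np

# Single-qubit Pauli matrices (Hermitian circuit generators).
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def kron(*ops):
    """Tensor product of several single-qubit operators."""
    out = np.array([[1.0]], dtype=complex)
    for op in ops:
        out = np.kron(out, op)
    return out

def dla_dimension(generators, tol=1e-9):
    """Dimension of the real Lie algebra generated by {iG} for Hermitian
    generators G: repeatedly take commutators, keeping only elements that
    are linearly independent of what has been collected so far."""
    elems, vecs = [], []  # normalized matrices and orthonormal flattenings

    def add(m):
        n0 = np.linalg.norm(m)
        if n0 < tol:
            return False
        m = m / n0                      # keep magnitudes bounded
        v = m.flatten()
        for u in vecs:                  # Gram-Schmidt against current basis
            v = v - np.vdot(u, v) * u
        n = np.linalg.norm(v)
        if n < tol:
            return False                # already in the span
        elems.append(m)
        vecs.append(v / n)
        return True

    for g in generators:
        add(1j * np.asarray(g, dtype=complex))
    changed = True
    while changed:                      # close the set under commutators
        changed = False
        for a, b in itertools.combinations(list(elems), 2):
            if add(a @ b - b @ a):
                changed = True
    return len(elems)
```

For instance, `{X, Z}` on one qubit closes to su(2) (dimension 3), while the commuting pair `{X⊗I, I⊗X}` stays at dimension 2. A LieTrunc-style design, on this reading, chooses generator sets whose closure remains a small structured subalgebra rather than filling out all of u(2^n).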

Abstract

Quantum Machine Learning (QML) is fundamentally limited by two challenges: barren plateaus (exponentially vanishing gradients) and the fragility of parameterized quantum circuits under noise. Despite extensive empirical studies, a unified theoretical framework remains lacking. We introduce LieTrunc-QNN, an algebraic-geometric framework that characterizes trainability via Lie-generated dynamics. Parameterized quantum circuits are modeled as Lie subalgebras of u(2^n), whose action induces a Riemannian manifold of reachable quantum states. Expressivity is reinterpreted as intrinsic manifold dimension and geometry. We establish a geometric capacity-plateau principle: increasing effective dimension leads to exponential gradient suppression due to concentration of measure. By restricting to structured Lie subalgebras (LieTrunc), the manifold is contracted, preventing concentration and preserving non-degenerate gradients. We prove two main results: (1) a trainability lower bound for LieTrunc-QNN, and (2) that the Fubini-Study metric rank is bounded by the algebraic span of generators, showing expressivity is governed by structure rather than parameter count. Compact Lie subalgebras also provide inherent robustness to perturbations. Importantly, we establish a polynomial trainability regime where gradient variance decays polynomially instead of exponentially. Experiments (n=2-6) validate the theory: LieTrunc-QNN maintains stable gradients and high effective dimension, while random truncation leads to metric rank collapse. At n=6, full metric rank is preserved (rank=16). Results support a scaling law between gradient variance and effective dimension. This work provides a unified geometric framework for QNN design, linking Lie algebra, manifold geometry, and optimization.
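The metric-rank claim in the abstract can be illustrated numerically: for a product-of-exponentials ansatz |ψ(θ)⟩ = ∏_k exp(−iθ_k G_k)|0…0⟩, the Fubini–Study metric g_ij = Re[⟨∂_iψ|∂_jψ⟩ − ⟨∂_iψ|ψ⟩⟨ψ|∂_jψ⟩] can be estimated by finite differences and its rank inspected. The sketch below is an independent toy illustration under these assumptions, not the paper's implementation:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def state(thetas, generators):
    """|psi(theta)> = prod_k exp(-i theta_k G_k) |0...0> for Hermitian G_k."""
    dim = generators[0].shape[0]
    psi = np.zeros(dim, dtype=complex)
    psi[0] = 1.0
    for t, g in zip(thetas, generators):
        w, v = np.linalg.eigh(g)        # exp(-i t G) via eigendecomposition
        psi = (v * np.exp(-1j * t * w)) @ (v.conj().T @ psi)
    return psi

def fs_metric_rank(thetas, generators, eps=1e-5, tol=1e-7):
    """Rank of the Fubini-Study metric
    g_ij = Re[<d_i psi|d_j psi> - <d_i psi|psi><psi|d_j psi>],
    with derivatives taken by central finite differences."""
    psi = state(thetas, generators)
    cols = []
    for k in range(len(thetas)):
        tp, tm = thetas.copy(), thetas.copy()
        tp[k] += eps
        tm[k] -= eps
        cols.append((state(tp, generators) - state(tm, generators)) / (2 * eps))
    J = np.column_stack(cols)                  # columns are d_k |psi>
    proj = J - np.outer(psi, psi.conj() @ J)   # remove the |psi> component
    g = np.real(J.conj().T @ proj)
    return np.linalg.matrix_rank(g, tol=tol)
```

The rank tracks genuinely reachable directions rather than parameter count, in line with the abstract's claim: two independent rotations (generators X then Z) give rank 2, while two redundant Z rotations from |0⟩ only change the global phase and give rank 0 despite having the same number of parameters.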