From Reachability to Learnability: Geometric Design Principles for Quantum Neural Networks

arXiv stat.ML / 3/26/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper argues that, unlike classical deep networks, quantum neural networks (QNNs) cannot rely on depth or mere state reachability alone to achieve effective feature learning through adaptive geometry.
  • It models encoded data as an embedded manifold in complex projective space and analyzes the effect of infinitesimal unitary actions using Lie-algebra directions.
  • The authors introduce Classical-to-Lie-algebra (CLA) maps and an “almost Complete Local Selectivity” (aCLS) criterion combining directional completeness with data-dependent local selectivity.
  • They prove a tradeoff: data-independent trainable unitaries are directionally complete but non-selective (learnable rigid reorientations), while pure data encodings are selective but non-tunable (fixed deformations); genuine geometric flexibility therefore requires a joint dependence on data and trainable weights.
  • Numerical experiments indicate that QNN designs satisfying aCLS via data re-uploading outperform non-tunable approaches while using substantially fewer gate operations, and the work reframes QNN design as controllable geometry of hidden quantum representations.
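The Lie-algebra viewpoint in these points can be made concrete with a toy single-qubit sketch (all symbols here — the generator `Z`, the encoding `Ry(x)|0>`, the linear data-dependence `G(x) = x·Z` — are illustrative assumptions, not the paper's constructions). For a gate exp(−iθG), varying θ pushes the encoded state along −iG|ψ⟩, projected onto the tangent space of projective space (the component orthogonal to |ψ⟩, which discards the unobservable global phase):

```python
import numpy as np

def tangent_direction(G, psi):
    """Direction in projective space induced by the generator G at state psi."""
    v = -1j * G @ psi
    v = v - np.vdot(psi, v) * psi  # project out the global-phase component
    return v

Z = np.diag([1.0, -1.0])

def encode(x):  # |psi(x)> = Ry(x)|0>, a curve of encoded data states
    return np.array([np.cos(x / 2), np.sin(x / 2)], dtype=complex)

x1, x2 = 0.3, 1.1

# Data-independent trainable generator: the same G acts at every data point,
# so all encoded states are pushed by one shared rule (complete but non-selective).
d1 = tangent_direction(Z, encode(x1))
d2 = tangent_direction(Z, encode(x2))

# Data-dependent generator G(x) = x * Z (re-uploading style): each data point
# now gets its own push, which is what local selectivity requires.
s1 = tangent_direction(x1 * Z, encode(x1))
s2 = tangent_direction(x2 * Z, encode(x2))

print(np.allclose(s1, x1 * d1), np.allclose(s2, x2 * d2))  # True True
```

In this minimal example the data-dependence only rescales each point's tangent direction by its own input value, but it already shows the mechanism the bullets describe: the deformation applied to the manifold varies point-by-point only when the generator depends jointly on the data and a tunable parameter.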

Abstract

Classical deep networks are effective because depth enables adaptive geometric deformation of data representations. In quantum neural networks (QNNs), however, depth or state reachability alone does not guarantee this feature-learning capability. We study this question in the pure-state setting by viewing encoded data as an embedded manifold in $\mathbb{C}P^{2^n-1}$ and analysing infinitesimal unitary actions through Lie-algebra directions. We introduce Classical-to-Lie-algebra (CLA) maps and the criterion of almost Complete Local Selectivity (aCLS), which combines directional completeness with data-dependent local selectivity. Within this framework, we show that data-independent trainable unitaries are complete but non-selective, i.e. learnable rigid reorientations, whereas pure data encodings are selective but non-tunable, i.e. fixed deformations. Hence, geometric flexibility requires a non-trivial joint dependence on data and trainable weights. We further show that accessing high-dimensional deformations of many-qubit state manifolds requires parametrised entangling directions; fixed entanglers such as CNOT alone do not provide adaptive geometric control. Numerical examples validate that aCLS-satisfying data re-uploading models outperform non-tunable schemes while requiring only a quarter of the gate operations. Thus, the resulting picture reframes QNN design from state reachability to controllable geometry of hidden quantum representations.
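The "rigid reorientation vs fixed deformation" dichotomy in the abstract can be checked numerically in a hypothetical single-qubit setting (the encoding `Ry(x)|0>` and the angles below are assumptions for illustration, not taken from the paper). A data-independent trainable unitary moves every encoded state but, being a single isometry, cannot change pairwise overlaps; a gate whose angle depends jointly on a weight and the input can:

```python
import numpy as np

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def encode(x):  # |psi(x)> = Ry(x)|0>
    return ry(x) @ np.array([1.0, 0.0])

x1, x2 = 0.3, 1.1
psi1, psi2 = encode(x1), encode(x2)
overlap_before = abs(psi1 @ psi2)

# Data-independent trainable unitary: one rotation applied to all inputs.
W = ry(0.7)  # "trainable", but shared across the whole dataset
overlap_rigid = abs((W @ psi1) @ (W @ psi2))

# Joint data-weight dependence (re-uploading style): the angle w*x differs
# per input, so the map can stretch or compress the embedded data manifold.
w = 0.7
overlap_deformed = abs((ry(w * x1) @ psi1) @ (ry(w * x2) @ psi2))

print(np.isclose(overlap_before, overlap_rigid))     # True: rigid reorientation
print(np.isclose(overlap_before, overlap_deformed))  # False: genuine deformation
```

Preserved overlaps mean the relative geometry of the hidden representation is frozen no matter how the shared unitary is trained, which is exactly why the abstract argues that feature learning needs the joint data-weight dependence rather than reachability alone.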