Sheaf Neural Networks on SPD Manifolds: Second-Order Geometric Representation Learning

arXiv cs.LG · April 23, 2026


Key Points

  • The paper identifies two limitations of graph neural networks: they often rely on Euclidean vector features even when matrix-valued (second-order) geometry is needed, and standard message passing applies the same shared transformation across all edges.
  • It proposes the first sheaf neural network designed to operate natively on the symmetric positive definite (SPD) manifold, avoiding projection back to Euclidean space.
  • The method leverages the SPD manifold’s Lie group structure to define well-posed sheaf operators for SPD-valued features.
  • The authors theoretically prove that SPD-valued sheaves are strictly more expressive than Euclidean sheaves, enabling global configurations (global sections) that vector-based sheaves cannot represent.
  • Experiments show the approach can convert rank-1 directional inputs into full-rank SPD matrices and achieves state-of-the-art results on 6 out of 7 MoleculeNet benchmarks, with improved depth robustness via a dual-stream architecture.
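The last point, turning rank-1 directional inputs into full-rank SPD matrices, can be illustrated with a minimal sketch (not the paper's construction): each direction vector d contributes a rank-1 outer product d·dᵀ, and summing these over several non-collinear directions yields a full-rank symmetric positive definite matrix, i.e. a genuine second-order feature.

```python
import numpy as np

rng = np.random.default_rng(0)

# Five unit direction vectors in R^3 (stand-ins for, e.g., bond directions).
directions = rng.normal(size=(5, 3))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)

# Each outer product d d^T is rank-1 and only positive *semi*-definite;
# their sum over non-collinear directions is full-rank SPD.
spd_feature = sum(np.outer(d, d) for d in directions)

eigvals = np.linalg.eigvalsh(spd_feature)
```

Because a single outer product has rank 1, any vector-valued summary of one direction loses this covariance structure; the SPD matrix records how the directions relate to each other.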

Abstract

Graph neural networks face two fundamental challenges rooted in the linear structure of Euclidean vector spaces: (1) Current architectures represent geometry through vectors (directions, gradients), yet many tasks require matrix-valued representations that capture relationships between directions, such as how atomic orientations covary in a molecule. These second-order representations are naturally captured by points on the manifold of symmetric positive definite (SPD) matrices; (2) Standard message passing applies shared transformations across edges. Sheaf neural networks address this via edge-specific transformations, but existing formulations remain confined to vector spaces and therefore cannot propagate matrix-valued features. We address both challenges by developing the first sheaf neural network that operates natively on the SPD manifold. Our key insight is that the SPD manifold admits a Lie group structure, enabling well-posed analogs of sheaf operators without projecting to Euclidean space. Theoretically, we prove that SPD-valued sheaves are strictly more expressive than Euclidean sheaves: they admit consistent configurations (global sections) that vector-valued sheaves cannot represent, directly translating to richer learned representations. Empirically, our sheaf convolution transforms effectively rank-1 directional inputs into full-rank matrices encoding local geometric structure. Our dual-stream architecture achieves state-of-the-art results on 6 of 7 MoleculeNet benchmarks, with the sheaf framework providing consistent depth robustness.
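To make the sheaf idea concrete, here is a hedged sketch of one diffusion step with edge-specific restriction maps acting on SPD-valued node features. This uses a log-Euclidean workaround (matrix log to the tangent space, sheaf smoothing, matrix exp back), not the paper's Lie-group construction; the function names and the conjugation action S → F S Fᵀ are illustrative assumptions. The point it demonstrates is the two ingredients the abstract names: per-edge transformations (unlike shared-weight message passing) and outputs that remain on the SPD manifold.

```python
import numpy as np

def sym_logm(P):
    """Matrix log of an SPD matrix via eigendecomposition (always real)."""
    w, V = np.linalg.eigh(P)
    return V @ np.diag(np.log(w)) @ V.T

def sym_expm(S):
    """Matrix exp of a symmetric matrix; the result is always SPD."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.exp(w)) @ V.T

def sheaf_diffusion_step(X, edges, restr, alpha=0.1):
    """One sheaf-diffusion step on SPD-valued node features (sketch).

    X     : list of SPD matrices, one per node
    edges : list of (u, v) pairs
    restr : dict mapping (u, v) -> (F_u, F_v), edge-specific restriction
            maps; F acts on a symmetric feature S by conjugation F S F^T.
    """
    n, d = len(X), X[0].shape[0]
    S = [sym_logm(P) for P in X]                # pull to tangent space
    lap = [np.zeros((d, d)) for _ in range(n)]  # sheaf Laplacian applied to S
    for (u, v) in edges:
        Fu, Fv = restr[(u, v)]
        # Blockwise disagreement along the edge, per the sheaf Laplacian:
        # each node is penalized by how far its restricted feature is
        # from its neighbor's restricted feature.
        diff = Fu @ S[u] @ Fu.T - Fv @ S[v] @ Fv.T
        lap[u] += Fu.T @ diff @ Fu
        lap[v] -= Fv.T @ diff @ Fv
    # Exp back to the manifold, so every output is again SPD.
    return [sym_expm(S[i] - alpha * lap[i]) for i in range(n)]

# Demo: two nodes joined by one edge, with random restriction maps.
rng = np.random.default_rng(1)
A, B = rng.normal(size=(2, 3, 3))
X = [A @ A.T + np.eye(3), B @ B.T + np.eye(3)]
restr = {(0, 1): (rng.normal(size=(3, 3)), rng.normal(size=(3, 3)))}
Y = sheaf_diffusion_step(X, [(0, 1)], restr)
```

The key contrast with ordinary message passing is that `restr[(u, v)]` carries a different pair of maps for every edge, and the exp/log sandwich keeps features on the manifold instead of projecting them into a flat vector space.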