Sheaf Neural Networks on SPD Manifolds: Second-Order Geometric Representation Learning
arXiv cs.LG, April 23, 2026
Key Points
- The paper identifies two limitations of graph neural networks: they often rely on Euclidean vector features even when the task calls for matrix-valued (second-order) geometry, and standard message passing applies the same shared transformation across all edges.
- It proposes the first sheaf neural network designed to operate natively on the symmetric positive definite (SPD) manifold, avoiding projection back to Euclidean space.
- The method leverages the SPD manifold’s Lie group structure to define well-posed sheaf operators for SPD-valued features.
- The authors theoretically prove that SPD-valued sheaves are strictly more expressive than Euclidean sheaves, enabling global configurations (global sections) that vector-based sheaves cannot represent.
- Experiments show the approach can recover full-rank SPD matrices from rank-1 directional inputs and achieves state-of-the-art results on 6 of 7 MoleculeNet benchmarks, with improved robustness to depth via a dual-stream architecture.
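The core ideas above can be sketched in a toy example. The code below is a minimal illustration, not the paper's actual method: it models a sheaf restriction map on an edge as a congruence action `F S Fᵀ` (which preserves positive definiteness when `F` is invertible) and aggregates SPD-valued neighbor messages with a log-Euclidean mean. The edge map `F`, the regularization constant, and the use of the log-Euclidean metric are all illustrative assumptions.

```python
import numpy as np

def sym_logm(S):
    # Matrix logarithm of an SPD matrix via eigendecomposition.
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def sym_expm(S):
    # Matrix exponential of a symmetric matrix (result is SPD).
    w, V = np.linalg.eigh(S)
    return (V * np.exp(w)) @ V.T

def restriction(F, S):
    # Edge-wise sheaf restriction map as a congruence action:
    # F S F^T remains SPD whenever F is invertible.
    return F @ S @ F.T

def log_euclidean_mean(mats):
    # Aggregate SPD-valued messages in the log-Euclidean metric:
    # average in the tangent (log) space, then map back via exp.
    return sym_expm(np.mean([sym_logm(M) for M in mats], axis=0))

# Toy example: a near-rank-1 directional input regularized to SPD,
# mixed with an identity-feature neighbor.
rng = np.random.default_rng(0)
d = 3
v = rng.standard_normal((d, 1))
S1 = v @ v.T + 1e-2 * np.eye(d)   # near-rank-1 SPD matrix
S2 = np.eye(d)

# Hypothetical learned edge map (small perturbation of identity, invertible).
F = np.eye(d) + 0.1 * rng.standard_normal((d, d))

msg = restriction(F, S1)
out = log_euclidean_mean([msg, S2])

# The aggregated feature stays symmetric and full-rank SPD.
assert np.allclose(out, out.T)
assert np.all(np.linalg.eigvalsh(out) > 0)
```

Note that aggregation happens in the log (tangent) space so the result never leaves the SPD manifold, which is the property the paper's native-SPD construction is designed to guarantee without projecting back to Euclidean space.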