Spectral Graph Sparsification Preserves Representation Geometry in Graph Neural Networks

arXiv cs.LG · May 5, 2026


Key Points

  • The paper investigates whether spectral graph sparsification preserves the geometry of learned embeddings in graph neural networks, going beyond preserving Laplacian quadratic forms.
  • It proves that for polynomial-filter GNNs, any ε-spectral sparsifier causes only O(ε) perturbations to polynomial graph filters, multilayer hidden representations, and the corresponding Gram matrices.
  • These theoretical bounds imply stability of embedding-space properties such as squared pairwise distances, class means, and covariance structure under sparsification.
  • The authors also show finite-time training stability: under smoothness and boundedness assumptions, gradient descent on dense vs. sparsified graphs yields weight trajectories whose separation grows at most proportionally to the sparsification distortion.
  • Empirical results using effective-resistance sparsification on synthetic and real datasets (e.g., FashionMNIST, Cora, Paul15) support the predicted perturbation chain, with low divergence in Gram matrices and training dynamics and strong downstream neighborhood/class-centroid stability.
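To make the sparsification step in the last point concrete, here is a minimal sketch of effective-resistance (Spielman–Srivastava-style) spectral sparsification with numpy: edges are sampled with probability proportional to weight times effective resistance and reweighted so the sparsified Laplacian approximates the dense one in quadratic form. This is an illustrative implementation, not the paper's code; the helper names (`laplacian`, `effective_resistances`, `sparsify`) are our own.

```python
import numpy as np

def laplacian(n, edges, weights):
    """Build the weighted graph Laplacian L = D - W."""
    L = np.zeros((n, n))
    for (u, v), w in zip(edges, weights):
        L[u, u] += w
        L[v, v] += w
        L[u, v] -= w
        L[v, u] -= w
    return L

def effective_resistances(L, edges):
    """R_e = (e_u - e_v)^T L^+ (e_u - e_v), via the Moore-Penrose pseudoinverse."""
    Lp = np.linalg.pinv(L)
    return np.array([Lp[u, u] + Lp[v, v] - 2 * Lp[u, v] for u, v in edges])

def sparsify(n, edges, weights, num_samples, rng):
    """Sample edges with probability proportional to w_e * R_e, reweighting
    each sampled copy by 1 / (num_samples * p_e) so L' = E[L] is unbiased."""
    R = effective_resistances(laplacian(n, edges, weights), edges)
    p = weights * R
    p = p / p.sum()
    new_w = np.zeros(len(edges))
    for i in rng.choice(len(edges), size=num_samples, p=p):
        new_w[i] += weights[i] / (num_samples * p[i])
    keep = new_w > 0
    kept_edges = [e for e, k in zip(edges, keep) if k]
    return kept_edges, new_w[keep]
```

With enough samples, the sparsified Laplacian `laplacian(n, kept_edges, new_w)` satisfies an ε-approximation of the dense quadratic form, which is the hypothesis the paper's O(ε) perturbation bounds start from.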

Abstract

Spectral graph sparsification is a classical tool for reducing graph complexity while preserving Laplacian quadratic forms. In graph neural networks (GNNs), sparsification is often used to accelerate computation while maintaining predictive performance. In this work, we study a complementary representation-level question: does sparsification preserve the geometry of learned embeddings? For polynomial-filter GNNs, we prove that any ε-spectral sparsifier induces O(ε) perturbations in polynomial graph filters, multilayer hidden representations, and their Gram matrices. These guarantees imply stability of squared pairwise distances, class means, and covariance structure in embedding space. We further establish finite-time training stability: under smoothness and boundedness assumptions, gradient descent on dense and sparsified graphs produces weight trajectories whose separation grows at most proportionally to the sparsification distortion. Empirically, effective-resistance sparsification validates the predicted perturbation chain on synthetic graphs and preserves hidden representation geometry on real datasets. In our experiments, the Gram matrices and training dynamics show low divergence even under substantial sparsification, consistent with the predicted stability under spectral sparsification. Hidden Gram preservation strongly predicts neighborhood preservation and class-centroid stability across FashionMNIST, Cora, and Paul15. Together, these results show that spectral sparsification preserves not only graph operators, but also the representation geometry that supports downstream use of GNN embeddings for interpretability.
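The abstract's central quantity, the perturbation of a polynomial graph filter and its Gram matrix under a change of Laplacian, can be sketched in a few lines. The code below is an illustrative check under assumed choices (generic polynomial coefficients, random features, a small random weighted graph), not the paper's experimental setup: it applies p(L)X = Σₖ cₖ LᵏX for a dense and a perturbed Laplacian and measures the relative Frobenius divergence of the resulting Gram matrices.

```python
import numpy as np

def poly_filter(L, X, coeffs):
    """Apply the polynomial graph filter p(L) X = sum_k c_k L^k X."""
    H = np.zeros_like(X)
    P = X.copy()              # current power: L^k X, starting at k = 0
    for c in coeffs:
        H += c * P
        P = L @ P             # advance to the next power of L
    return H

def gram_divergence(H_dense, H_sparse):
    """Relative Frobenius distance between the Gram matrices H H^T."""
    G_dense = H_dense @ H_dense.T
    G_sparse = H_sparse @ H_sparse.T
    return np.linalg.norm(G_dense - G_sparse) / np.linalg.norm(G_dense)
```

Feeding in a dense Laplacian and an ε-close sparsified one, `gram_divergence` stays small, mirroring the paper's claim that O(ε) operator perturbations propagate to O(ε) Gram-matrix perturbations.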