Generalization Bounds for Spectral GNNs via Fourier Domain Analysis

arXiv cs.LG / 4/2/2026


Key Points

  • The paper studies spectral graph neural networks by transforming them into the graph Fourier domain, where each layer acts as an element-wise frequency update, making depth and polynomial order explicit.
  • It proves that Gaussian complexity is invariant under the Graph Fourier Transform, enabling the authors to derive data-dependent generalization bounds that also account for depth and order.
  • The analysis includes stability estimates, linking model behavior to whether perturbations are amplified through layers and polynomial expansions.
  • For linear variants, the derived bounds are tighter than existing results, and experiments on real graphs show that a data-dependent term correlates with the observed generalization gap across different polynomial bases.
  • The findings suggest practical guidance for choosing polynomial bases and avoiding frequency amplification across layers to improve generalization.
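The idea that each spectral layer is an element-wise frequency update can be sketched in a few lines. The sketch below is illustrative only and not the paper's implementation: it builds a toy 4-node path graph, diagonalizes its Laplacian to get a graph Fourier basis, and applies a polynomial filter to the eigenvalues, so the update in the Fourier domain is a per-frequency multiplication. The graph, signal, and coefficients `theta` are hypothetical choices for the example.

```python
import numpy as np

# Toy 4-node path graph (illustrative, not from the paper).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A            # combinatorial graph Laplacian

# Graph Fourier basis: eigenvectors of the symmetric Laplacian (U is orthogonal).
lam, U = np.linalg.eigh(L)

def spectral_layer(x, theta):
    """One spectral filtering step as an element-wise frequency update:
    h(lam) = theta[0] + theta[1]*lam + theta[2]*lam**2 + ... applied per frequency."""
    x_hat = U.T @ x                                    # Graph Fourier Transform
    h = np.polynomial.polynomial.polyval(lam, theta)   # polynomial filter on the spectrum
    return U @ (h * x_hat)                             # per-frequency scaling, inverse GFT

x = np.array([1.0, 0.0, 0.0, 0.0])        # signal on the graph (hypothetical)
theta = np.array([0.5, -0.1, 0.02])       # order-2 filter coefficients (hypothetical)
y = spectral_layer(x, theta)
```

Because the spectrum `lam` is fixed by the graph, depth (stacking such layers) and polynomial order (the length of `theta`) appear explicitly as repeated per-frequency multiplications, which is the separation the analysis exploits.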

Abstract

Spectral graph neural networks learn graph filters, but their behavior with increasing depth and polynomial order is not well understood. We analyze these models in the graph Fourier domain, where each layer becomes an element-wise frequency update, separating the fixed spectrum from the trainable parameters and making depth and order explicit. In this setting, we show that Gaussian complexity is invariant under the Graph Fourier Transform, which allows us to derive data-dependent, depth- and order-aware generalization bounds together with stability estimates. In the linear case, our bounds are tighter than existing results, and on real graphs, the data-dependent term correlates with the generalization gap across polynomial bases, highlighting practical choices that avoid frequency amplification across layers.
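For intuition on why Gaussian complexity is unaffected by the change of basis, consider the standard definition for a norm-bounded linear class; the concrete class and notation below are an illustrative sketch, not the paper's general argument. Writing the GFT as $\hat{x} = U^\top x$ with $U$ the orthogonal eigenvector matrix of the Laplacian:

```latex
\mathcal{G}(\mathcal{F}) \;=\; \mathbb{E}_{g}\,\sup_{\|w\|\le B}\,
\frac{1}{n}\sum_{i=1}^{n} g_i \langle w, x_i \rangle
\;=\; \frac{B}{n}\,\mathbb{E}_{g}\Big\|\sum_{i=1}^{n} g_i x_i\Big\|,
\qquad g_i \sim \mathcal{N}(0,1).
% Since U is orthogonal, norms are preserved:
% \|\sum_i g_i U^\top x_i\| = \|\sum_i g_i x_i\|,
% so the complexity computed on the Fourier-domain signals \hat{x}_i
% equals the complexity on the raw signals x_i.
```

This is the sense in which the analysis can be carried out entirely in the frequency domain without changing the capacity being measured.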