GIST: Gauge-Invariant Spectral Transformers for Scalable Graph Neural Operators

arXiv cs.LG / 3/18/2026

Key Points

  • GIST introduces Gauge-Invariant Spectral Transformers for graph neural operators, achieving end-to-end O(N) complexity via random projections while preserving gauge invariance.
  • It addresses two failure modes of spectral methods: exact approaches whose cubic-complexity eigendecomposition breaks gauge invariance through numerical solver artifacts, and efficient approximations that sacrifice gauge symmetry by design; both generalize poorly across different spectral decompositions of similar graphs or discretizations of the same mesh.
  • The method enables discretization-invariant learning with bounded mismatch error and allows parameter transfer across arbitrary mesh resolutions for neural operator tasks.
  • Empirically, GIST matches the state of the art on standard graph benchmarks (e.g., 99.50% micro-F1 on PPI) and scales to mesh-based neural-operator benchmarks with up to 750K nodes, achieving state-of-the-art aerodynamic prediction on the DrivAerNet and DrivAerNet++ datasets.
  • The architecture preserves gauge invariance through inner-product-based attention on the projected embeddings while keeping training scalable (a minimal sketch of the invariance follows these points).

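To make the invariance concrete: a Laplacian eigensolver may return eigenvectors with arbitrary per-column signs (or, for repeated eigenvalues, an arbitrarily rotated basis within each eigenspace), and any attention score built from raw eigenvector coordinates changes with that choice, while inner products of the embeddings do not. Below is a minimal NumPy sketch using sign flips as the solver artifact; the paper's actual projected embeddings are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy symmetric graph Laplacian on 8 nodes.
A = (rng.random((8, 8)) < 0.4).astype(float)
A = np.triu(A, 1)
A = A + A.T
L = np.diag(A.sum(axis=1)) - A

# Spectral embedding: first k eigenvectors, one valid "gauge".
k = 4
_, evecs = np.linalg.eigh(L)
phi = evecs[:, :k]

# A solver artifact: per-column sign flips. (For repeated eigenvalues the
# ambiguity is any orthogonal mix Q within the eigenspace; the identity
# (phi @ Q) @ (phi @ Q).T == phi @ phi.T covers that case too.)
signs = np.diag(rng.choice([-1.0, 1.0], size=k))
phi_flipped = phi @ signs

# Inner-product attention scores depend only on phi @ phi.T,
# so both gauges yield identical scores.
print(np.allclose(phi @ phi.T, phi_flipped @ phi_flipped.T))  # True
```
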
Abstract

Adapting transformer positional encodings to meshes and graph-structured data presents significant computational challenges: exact spectral methods require cubic-complexity eigendecomposition and can inadvertently break gauge invariance through numerical solver artifacts, while efficient approximate methods sacrifice gauge symmetry by design. Both failure modes cause catastrophic generalization failures in inductive learning, where models trained with one set of numerical choices fail when encountering different spectral decompositions of similar graphs or discretizations of the same mesh. We propose GIST (Gauge-Invariant Spectral Transformers), a new graph transformer architecture that resolves this challenge by achieving end-to-end O(N) complexity through random projections while algorithmically preserving gauge invariance via inner-product-based attention on the projected embeddings. We prove that GIST achieves discretization-invariant learning with bounded mismatch error, enabling parameter transfer across arbitrary mesh resolutions for neural operator applications. Empirically, GIST matches the state of the art on standard graph benchmarks (e.g., achieving 99.50% micro-F1 on PPI) while uniquely scaling to mesh-based neural-operator benchmarks with up to 750K nodes, achieving state-of-the-art aerodynamic prediction on the challenging DrivAerNet and DrivAerNet++ datasets.
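
The abstract does not spell out the random projection itself, so the following is only a plausibility sketch of how random projections can deliver linear-cost, gauge-invariant spectral features: apply a polynomial filter p(L) to Gaussian random vectors using sparse matrix-vector products (no eigendecomposition), and let attention consume inner products of the projected rows, which estimate entries of p(L)^2, a quantity defined by L alone and hence independent of any eigenvector gauge. The filter coefficients and dimensions below are hypothetical placeholders, not values from the paper.

```python
import numpy as np
from scipy.sparse import diags, random as sparse_random

rng = np.random.default_rng(1)

# Sparse symmetric Laplacian for N nodes; everything below uses only
# sparse matvecs, so the cost is linear in the number of edges.
N, m = 2000, 64
A = sparse_random(N, N, density=0.002, random_state=1)
A = A + A.T
L = diags(np.asarray(A.sum(axis=1)).ravel()) - A

# Projected embedding Z = p(L) @ R for p(L) = c0*I + c1*L + c2*L^2.
coeffs = [1.0, -0.5, 0.1]                  # hypothetical filter coefficients
R = rng.standard_normal((N, m)) / np.sqrt(m)
Z = np.zeros((N, m))
P = R.copy()                               # running power L^j @ R
for c in coeffs:
    Z += c * P
    P = L @ P                              # one sparse matvec per power

# Inner products of rows of Z are unbiased estimates of entries of
# p(L)^2, which depends only on L, never on an eigenvector basis.
i, j = 0, 1
print(Z[i] @ Z[j])                         # ~ (p(L)^2)[i, j]
```

Because these projected features are computed directly from L, there is no eigenvector basis to fix in the first place, which is one way the invariance-by-construction property described above can be realized.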