Gauge-Equivariant Intrinsic Neural Operators for Geometry-Consistent Learning of Elliptic PDE Maps

arXiv cs.AI / 3/17/2026

📰 News · Models & Research

Key Points

  • The paper proposes Gauge-Equivariant Intrinsic Neural Operators (GINO), a neural-operator class for elliptic PDE solution maps that uses intrinsic spectral multipliers and gauge-equivariant nonlinearities to enforce frame-independence.
  • On controlled experiments over the flat torus with known Fourier representations, GINO achieves low operator-approximation error, near machine-precision gauge equivariance, and robustness to structured metric perturbations.
  • GINO also shows strong cross-resolution generalization, with small commutation error under restriction/prolongation, and preserves structure in a regularized exact/coexact decomposition task.
  • Ablations link the smoothness of the learned spectral multiplier to stability under geometric perturbations, suggesting geometry-consistent, discretization-robust surrogates for elliptic PDEs on form-valued fields.
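The core design idea above — parameterizing the solution operator through a spectral multiplier acting on the Laplacian eigenvalues — can be sketched for the flat torus, where the FFT diagonalizes the Laplacian. This is a minimal standalone illustration, not the paper's implementation: the function name `apply_spectral_multiplier` is hypothetical, and in GINO the multiplier would be a learned function rather than a fixed closed form.

```python
import numpy as np

def apply_spectral_multiplier(u, multiplier):
    """Apply a scalar spectral multiplier m(lambda) to a periodic field u
    on the flat torus T^2, using the 2D FFT as the intrinsic eigenbasis
    of the Laplacian. `multiplier` is any callable on the eigenvalue grid;
    here it stands in for the learned multiplier."""
    n = u.shape[0]
    # Angular wavenumbers 2*pi*k for integer frequencies k on [0, 1)^2.
    k = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    lam = kx**2 + ky**2                      # eigenvalues of -Laplacian
    u_hat = np.fft.fft2(u)
    return np.real(np.fft.ifft2(multiplier(lam) * u_hat))

# Example: the exact resolvent of (alpha - Laplacian) is the multiplier
# m(lambda) = 1 / (alpha + lambda), so solving (alpha - Lap) u = f is a
# single multiplier application.
n = 32
x = np.arange(n) / n
X, Y = np.meshgrid(x, x, indexing="ij")
alpha = 1.0
u_true = np.cos(2 * np.pi * X)
f = (alpha + (2 * np.pi) ** 2) * u_true      # (alpha - Lap) u_true
u = apply_spectral_multiplier(f, lambda lam: 1.0 / (alpha + lam))
```

Because the multiplier acts only on intrinsic eigenvalues (never on coordinates or frames), any learned `multiplier` inherits frame-independence by construction, which is the decoupling of geometry from learnable functional dependence described above.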

Abstract

Learning solution operators of partial differential equations (PDEs) from data has emerged as a promising route to fast surrogate models in multi-query scientific workflows. However, for geometric PDEs whose inputs and outputs transform under changes of local frame (gauge), many existing operator-learning architectures remain representation-dependent, brittle under metric perturbations, and sensitive to discretization changes. We propose Gauge-Equivariant Intrinsic Neural Operators (GINO), a class of neural operators that parameterize elliptic solution maps primarily through intrinsic spectral multipliers acting on geometry-dependent spectra, coupled with gauge-equivariant nonlinearities. This design decouples geometry from learnable functional dependence and enforces consistency under frame transformations. We validate GINO on controlled problems on the flat torus $\mathbb{T}^2$, where ground-truth resolvent operators and regularized Helmholtz–Hodge decompositions admit closed-form Fourier representations, enabling theory-aligned diagnostics. Across experiments E1–E6, GINO achieves low operator-approximation error, near machine-precision gauge equivariance, robustness to structured metric perturbations, strong cross-resolution generalization with small commutation error under restriction/prolongation, and structure-preserving performance on a regularized exact/coexact decomposition task. Ablations further link the smoothness of the learned spectral multiplier to stability under geometric perturbations. These results suggest that enforcing intrinsic structure and gauge equivariance yields operator surrogates that are more geometry-consistent and discretization-robust for elliptic PDEs on form-valued fields.
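The regularized Helmholtz–Hodge (exact/coexact) decomposition used as a benchmark task does admit a closed-form Fourier representation on the torus. The sketch below shows one standard regularized form — projecting onto the curl-free direction with the projector $kk^\top/(|k|^2 + \varepsilon)$ — as an assumed illustration of the kind of ground truth involved; the function name and the specific regularization are ours, not necessarily the paper's exact scheme.

```python
import numpy as np

def regularized_hodge_split(v, eps=1e-8):
    """Regularized exact/coexact split of a periodic vector field
    v with shape (2, n, n) on T^2, computed in Fourier space.
    The curl-free ("exact") part is the projection of each Fourier mode
    onto k, using k k^T / (|k|^2 + eps); eps regularizes the zero mode.
    Returns (curl_free, div_free)."""
    n = v.shape[1]
    # Integer frequencies suffice: the projector k k^T / |k|^2 is
    # invariant under rescaling k, so the 2*pi*i factors cancel.
    k = np.fft.fftfreq(n, d=1.0 / n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    vx_hat, vy_hat = np.fft.fft2(v[0]), np.fft.fft2(v[1])
    # Component of each Fourier mode along k (the curl-free direction).
    dot = (kx * vx_hat + ky * vy_hat) / (k2 + eps)
    curl_free = np.stack([np.real(np.fft.ifft2(kx * dot)),
                          np.real(np.fft.ifft2(ky * dot))])
    div_free = v - curl_free
    return curl_free, div_free
```

Since both projectors are built from intrinsic spectral data, this task is a natural probe of whether a learned operator preserves the exact/coexact structure rather than mixing the two components.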