k-Maximum Inner Product Attention for Graph Transformers and the Expressive Power of GraphGPS

arXiv cs.LG / 4/7/2026


Key Points

  • The paper proposes k-Maximum Inner Product (k-MIP) attention for graph transformers, using a top-k selection of key nodes per query to avoid the quadratic cost of all-to-all attention on large graphs.
  • By combining top-k sparsification with an attention score computation using symbolic matrices, k-MIP attention achieves linear memory complexity and reports up to ~10× speedups over all-to-all attention.
  • The method enables processing graphs with over 500k nodes on a single NVIDIA A100 GPU while maintaining strong empirical performance on multiple benchmarks.
  • The authors provide theoretical guarantees that k-MIP transformers can approximate any full-attention transformer to arbitrary precision, i.e., they do not reduce expressive power in the studied sense.
  • The paper also analyzes the expressive capacity of the GraphGPS framework when equipped with this attention, establishing an upper bound on its graph distinguishing power in terms of the S-SEG-WL test, and validates the results on several datasets.
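The core idea of the top-k selection can be illustrated with a minimal sketch. Note this is a hypothetical simplification for intuition only: the paper's actual method computes attention scores via symbolic matrices to reach linear memory, whereas this version materializes the full score matrix and merely restricts the softmax and value aggregation to each query's k highest-scoring keys.

```python
import numpy as np

def k_mip_attention(Q, K, V, k):
    """Sketch of k-MIP attention: each query attends only to its
    top-k keys by inner-product score (illustrative, not the paper's
    memory-efficient implementation)."""
    scores = Q @ K.T                                   # (n_q, n_k) inner products
    # indices of the k largest scores per query row
    topk = np.argpartition(-scores, k - 1, axis=1)[:, :k]
    rows = np.arange(Q.shape[0])[:, None]
    sel = scores[rows, topk]                           # (n_q, k) selected scores
    w = np.exp(sel - sel.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                  # softmax over the k keys only
    return np.einsum('qk,qkd->qd', w, V[topk])         # weighted sum of top-k values
```

When k equals the number of key nodes, this reduces exactly to full softmax attention, which is consistent with the paper's claim that k-MIP transformers can approximate full-attention transformers.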

Abstract

Graph transformers have shown promise in overcoming limitations of traditional graph neural networks, such as oversquashing and difficulties in modelling long-range dependencies. However, their application to large-scale graphs is hindered by the quadratic memory and computational complexity of the all-to-all attention mechanism. Although alternatives such as linearized attention and restricted attention patterns have been proposed, these often degrade performance or limit expressive power. To better balance efficiency and effectiveness, we introduce k-Maximum Inner Product (k-MIP) attention for graph transformers. k-MIP attention selects the most relevant key nodes per query via a top-k operation, yielding a sparse yet flexible attention pattern. Combined with an attention score computation based on symbolic matrices, this results in linear memory complexity and practical speedups of up to an order of magnitude compared to all-to-all attention, enabling the processing of graphs with over 500k nodes on a single A100 GPU. We provide a theoretical analysis of expressive power, showing that k-MIP attention does not compromise the expressiveness of graph transformers: specifically, we prove that k-MIP transformers can approximate any full-attention transformer to arbitrary precision. In addition, we analyze the expressive power of the GraphGPS framework, in which we integrate our attention mechanism, and establish an upper bound on its graph distinguishing capability in terms of the S-SEG-WL test. Finally, we validate our approach on the Long Range Graph Benchmark, the City-Networks benchmark, and two custom large-scale inductive point cloud datasets, consistently ranking among the top-performing scalable graph transformers.