On Halting vs Converging in Recurrent Graph Neural Networks

arXiv cs.LG / 4/29/2026


Key Points

  • The paper studies three variants of Recurrent Graph Neural Networks (RGNNs): converging RGNNs (all node states stabilize), output-converging RGNNs (only outputs stabilize), and halting RGNNs (a per-node halting classifier decides when to stop); see the sketch after this list for the three stopping conditions side by side.
  • It proves expressiveness relationships among them on undirected graphs, showing converging RGNNs are as expressive as graded-bisimulation-invariant halting RGNNs, while output-converging RGNNs are at least as expressive.
  • The authors connect these expressiveness results to formal logic, concluding that (relative to MSO-expressible classifiers) converging RGNNs express exactly the graded modal μ-calculus (μGML), and output-converging RGNNs express at least μGML.
  • The theoretical results remain valid even with practical constraints, including restriction to ReLU networks with sum aggregation.
  • The main technical challenge is simulating halting RGNNs with converging ones: vertices may locally decide to halt at different times, so the paper introduces a “traffic-light” coordination protocol to handle this asynchrony. The results resolve an open question from Bollen et al. (2025) and confirm that the RGNN model of Pflueger et al. (2024) retains full μGML expressiveness even when convergence is guaranteed.
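
To make the distinction concrete, here is a minimal, purely illustrative Python sketch of a recurrent message-passing loop with the three stopping conditions. The update rule (ReLU with sum aggregation, matching the setting mentioned above), the function and parameter names (run_rgnn, w_self, w_nbr, readout, halt), the tolerance, and the toy usage values are assumptions for illustration, not the paper's construction.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def run_rgnn(adj, x, w_self, w_nbr, mode="converging",
             readout=None, halt=None, tol=1e-6, max_iters=1000):
    """Iterate a toy ReLU/sum-aggregation update under one of three stopping rules."""
    h = x.copy()
    active = np.ones(len(x), dtype=bool)                  # per-node "still running" flags (halting mode)
    for _ in range(max_iters):
        msgs = adj @ h                                     # sum aggregation over neighbours
        h_new = relu(h @ w_self + msgs @ w_nbr)            # shared recurrent update
        if mode == "halting":
            h_new = np.where(active[:, None], h_new, h)    # halted nodes freeze their state
            active &= ~halt(h_new)                         # per-node halting classifier (boolean array)
            if not active.any():
                return h_new
        elif mode == "converging":
            if np.max(np.abs(h_new - h)) < tol:            # every node state has stabilised
                return h_new
        else:  # "output-converging"
            if np.max(np.abs(readout(h_new) - readout(h))) < tol:  # only the outputs stabilise
                return h_new
        h = h_new
    return h

# Hypothetical usage: a 3-node path graph with small random parameters,
# chosen so the update is a contraction and the converging mode terminates.
rng = np.random.default_rng(0)
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
x = rng.normal(size=(3, 4))
w = 0.05 * rng.normal(size=(4, 4))
h_final = run_rgnn(adj, x, w, w, mode="converging")
```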

Abstract

Recurrent Graph Neural Networks (RGNNs) extend standard GNNs by iterating message-passing until some stopping condition is met. Various RGNN models have been proposed in the literature. In this paper, we study three such models: converging RGNNs, where all vertex representations must stabilise; output-converging RGNNs, where only the output classifications must stabilise; and halting RGNNs, where a per-vertex halting classifier determines when to stop. We establish expressiveness relationships between these models: over undirected graphs, converging RGNNs are exactly as expressive as graded-bisimulation-invariant halting RGNNs, while output-converging RGNNs are at least as expressive. Combined with prior results on halting RGNNs, this shows that, relative to the classifiers expressible in monadic second-order logic (MSO), converging RGNNs express exactly the graded modal μ-calculus (μGML), and output-converging RGNNs express at least μGML. These results hold even when restricting to ReLU networks with sum aggregation. The main technical challenge is simulating halting RGNNs by converging ones: without a global halting classifier, vertices may locally decide to halt at different times, causing desynchronisation. We develop a "traffic-light" protocol that enables vertices to coordinate despite this asynchrony. Our results answer an open question from Bollen et al. (2025) and show that the RGNN model of Pflueger et al. (2024) retains full μGML expressiveness even when convergence is guaranteed.
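
For context (this background is a standard presentation, not taken from the paper), μGML extends graded modal logic with fixpoint operators. Over labelled graphs its formulas can be given by

    φ ::= p | X | ¬φ | φ ∧ φ | ◇≥k φ | μX.φ

where p ranges over vertex labels, ◇≥k φ holds at a vertex with at least k neighbours satisfying φ, and μX.φ denotes the least fixpoint (with X occurring only positively); greatest fixpoints are definable by duality. Graded bisimulations, referenced in the key points above, refine ordinary bisimulations by also respecting the number of matching neighbours, which is what these counting modalities can observe.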