AI Navigate

Lyapunov Stable Graph Neural Flow

arXiv cs.LG / 3/16/2026


Key Points

  • The paper bridges graph neural networks with control theory to propose a defense framework based on integer- and fractional-order Lyapunov stability.
  • It constrains the GNN feature-update dynamics rather than relying on resource-heavy adversarial training or data purification.
  • It proposes an adaptive, learnable Lyapunov function with a novel projection mechanism that maps the network's state into a stable space, offering provable stability guarantees.
  • The stability mechanism is orthogonal to existing defenses and can be integrated with adversarial training for cumulative robustness.
  • Experiments show the Lyapunov-stable graph neural flows substantially outperform base neural flows and state-of-the-art baselines across standard benchmarks and various adversarial attack scenarios.

Abstract

Graph Neural Networks (GNNs) are highly vulnerable to adversarial perturbations in both topology and features, making the learning of robust representations a critical challenge. In this work, we bridge GNNs with control theory to introduce a novel defense framework grounded in integer- and fractional-order Lyapunov stability. Unlike conventional strategies that rely on resource-heavy adversarial training or data purification, our approach fundamentally constrains the underlying feature-update dynamics of the GNN. We propose an adaptive, learnable Lyapunov function paired with a novel projection mechanism that maps the network's state into a stable space, thereby offering theoretically provable stability guarantees. Notably, this mechanism is orthogonal to existing defenses, allowing for seamless integration with techniques like adversarial training to achieve cumulative robustness. Extensive experiments demonstrate that our Lyapunov-stable graph neural flows substantially outperform base neural flows and state-of-the-art baselines across standard benchmarks and various adversarial attack scenarios.
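To make the core idea concrete, here is a minimal sketch of a Lyapunov-stability projection applied to a continuous-time graph neural flow. The paper's actual mechanism uses an adaptive, *learnable* Lyapunov function; this sketch simplifies to a fixed quadratic candidate V(H) = ||H||² and the standard projection that clips the flow whenever it would violate the decrease condition dV/dt ≤ -αV. All names, the toy graph, and the flow dH/dt = ÂHW - H are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 5 nodes, symmetric adjacency with self-loops, symmetrically normalized.
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)
A += np.eye(5)
deg = A.sum(axis=1)
A_norm = A / np.sqrt(np.outer(deg, deg))

W = rng.normal(size=(4, 4))   # feature-update weights (untrained, illustrative)
H = rng.normal(size=(5, 4))   # node features: the state of the neural flow

def flow(H):
    """Unconstrained graph neural flow: dH/dt = A_norm @ H @ W - H."""
    return A_norm @ H @ W - H

def lyapunov(H):
    """Fixed quadratic Lyapunov candidate V(H) = ||H||_F^2 (the paper learns V)."""
    return float(np.sum(H * H))

def projected_flow(H, alpha=0.5):
    """Project the flow onto the half-space where dV/dt <= -alpha * V.

    If the raw flow f satisfies <grad V, f> + alpha*V <= 0 it is left alone;
    otherwise the violating component along grad V is subtracted, which
    enforces exponential decrease of V (hence stability) by construction.
    """
    f = flow(H)
    g = 2.0 * H                                  # gradient of V(H) = ||H||^2
    violation = np.sum(g * f) + alpha * lyapunov(H)
    if violation > 0:
        f = f - g * (violation / np.sum(g * g))  # minimal-norm correction
    return f

# Forward-Euler integration: V decreases monotonically along the trajectory.
dt = 0.002
vs = [lyapunov(H)]
for _ in range(500):
    H = H + dt * projected_flow(H)
    vs.append(lyapunov(H))

print(all(b < a for a, b in zip(vs, vs[1:])))  # True: V strictly decreases
```

An adversarial perturbation of the input enters this picture as a shifted initial state; because V is forced to shrink along every trajectory, the perturbed flow is pulled back toward the stable region rather than amplified, which is the intuition behind constraining the dynamics instead of retraining against attacks.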