Concept Graph Convolutions: Message Passing in the Concept Space

arXiv cs.LG / April 23, 2026


Key Points

  • The paper argues that Graph Neural Networks (GNNs) are hard to trust because their reasoning is opaque, and existing concept-based explanations do not fully reveal the message-passing process itself.
  • It introduces the Concept Graph Convolution, a new graph convolution layer that performs message passing using both raw node representations and node-level concept representations.
  • The method leverages structural edge weights and attention-based edge weights to better control how information is propagated across the graph.
  • The authors also propose a “pure” variant that performs message passing only in the concept space, aiming for more direct interpretability.
  • Experimental results indicate competitive accuracy while providing improved visibility into how node concepts evolve across successive convolution steps.
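The layer described in the key points can be pictured as ordinary message passing in which each edge carries two weights: a GCN-style structural weight derived from the adjacency matrix, and an attention weight computed from the nodes' concept activations. The following is a minimal NumPy sketch of that idea, not the authors' implementation; the function name and the parameters `W_x`, `W_c`, and `a` are hypothetical stand-ins for the layer's learnable weights.

```python
import numpy as np

def softmax_masked(scores, mask):
    """Row-wise softmax restricted to entries where mask is True."""
    scores = np.where(mask, scores, -np.inf)
    scores = scores - scores.max(axis=1, keepdims=True)
    e = np.where(mask, np.exp(scores), 0.0)
    return e / e.sum(axis=1, keepdims=True)

def concept_graph_conv(X, C, A, W_x, W_c, a):
    """One message-passing step in the spirit of the Concept Graph
    Convolution (a sketch under assumed parameterization).

    X : (n, d)  raw node features
    C : (n, k)  node-level concept activations
    A : (n, n)  binary adjacency matrix
    W_x : (d, h), W_c : (k, h)  assumed learnable projections
    a : (2k,)  assumed learnable attention vector over concept pairs
    """
    n, k = C.shape
    A_hat = A + np.eye(n)                        # self-loops keep each node's own state

    # Structural edge weights: symmetric degree normalization (GCN-style)
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    S = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]

    # Attention-based edge weights computed from concept representations
    # (GAT-style additive logits: a_src . c_i + a_dst . c_j on each edge)
    e = (C @ a[:k])[:, None] + (C @ a[k:])[None, :]
    e = np.where(e > 0, e, 0.2 * e)              # LeakyReLU
    Att = softmax_masked(e, A_hat > 0)

    # Messages combine the raw and concept channels, each propagated
    # with its own set of edge weights
    H = S @ (X @ W_x) + Att @ (C @ W_c)
    return np.maximum(H, 0)                      # ReLU nonlinearity
```

In this sketch the layer returns a fused representation `H`; in the paper's setting, concept activations for the next layer would be re-extracted from it by a concept encoder, which is left abstract here.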

Abstract

Trust in the predictions of Graph Neural Networks is limited by their opaque reasoning process. Prior methods have tried to explain graph networks via concept-based explanations extracted from the latent representations obtained after message passing. However, these explanations fall short of explaining the message-passing process itself. To this end, we propose the Concept Graph Convolution, the first graph convolution designed to operate on node-level concepts for improved interpretability. The proposed convolutional layer performs message passing on a combination of raw and concept representations using structural and attention-based edge weights. We also propose a pure variant of the convolution that operates only in the concept space. Our results show that the Concept Graph Convolution achieves competitive task accuracy while offering increased insight into the evolution of concepts across convolutional steps.
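The "pure" variant mentioned in the abstract keeps every intermediate state in the concept space: messages are exchanged only between concept activation vectors, so each layer's output is directly readable as concept scores. A hypothetical NumPy sketch of such a layer follows; the function and parameter names (`W`, `a`) are my own, not the paper's, and the attention form is an assumed GAT-style additive score.

```python
import numpy as np

def pure_concept_conv(C, A, W, a):
    """Sketch of a pure concept-space convolution: message passing over
    node-level concept activations only (no raw features involved).

    C : (n, k)  concept activations in [0, 1]
    A : (n, n)  binary adjacency matrix
    W : (k, k)  assumed learnable concept-to-concept projection
    a : (2k,)   assumed learnable attention vector
    """
    n, k = C.shape
    A_hat = A + np.eye(n)                            # self-loops retain own concepts

    # Additive attention logits on edges, from concept representations
    e = (C @ a[:k])[:, None] + (C @ a[k:])[None, :]
    e = np.where(e > 0, e, 0.2 * e)                  # LeakyReLU
    e = np.where(A_hat > 0, e, -np.inf)              # restrict to existing edges
    e = e - e.max(axis=1, keepdims=True)
    w = np.where(A_hat > 0, np.exp(e), 0.0)
    Att = w / w.sum(axis=1, keepdims=True)

    # Sigmoid keeps the output in [0, 1], so the result of every layer
    # can be inspected directly as updated concept activations
    return 1.0 / (1.0 + np.exp(-(Att @ (C @ W))))
```

Because the state never leaves the concept space, inspecting the output of each layer shows how a node's concept profile shifts as information from its neighborhood is mixed in, which is the interpretability benefit the abstract attributes to this variant.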