Concept Graph Convolutions: Message Passing in the Concept Space
arXiv cs.LG / 4/23/2026
Key Points
- The paper argues that Graph Neural Networks (GNNs) are hard to trust because their reasoning is opaque, and existing concept-based explanations do not fully reveal the message-passing process itself.
- It introduces the Concept Graph Convolution, a new graph convolution layer that performs message passing using both raw node representations and node-level concept representations.
- The method leverages structural edge weights and attention-based edge weights to better control how information is propagated across the graph.
- The authors also propose a “pure” variant that performs message passing only in the concept space, aiming for more direct interpretability.
- Experimental results indicate competitive accuracy while providing improved visibility into how node concepts evolve across successive convolution steps.
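To make the key points above concrete, here is a minimal NumPy sketch of what such a layer could look like. This is an illustration, not the paper's actual method: the function name `concept_graph_conv`, the sigmoid concept projection, the concept-similarity attention, and the `pure` flag are all assumptions chosen to mirror the bullets (raw features plus node-level concepts, structural and attention-based edge weights, and a concept-space-only variant).

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def concept_graph_conv(X, A, W_c, W_m, pure=False):
    """One hypothetical concept-graph-convolution step (a sketch,
    not the paper's layer).

    X:   (n, d) raw node features
    A:   (n, n) binary adjacency (structural edge weights)
    W_c: (d, k) projection into a k-dimensional concept space
    W_m: message transform, (d + k, d_out) or (k, d_out) if pure
    """
    n = X.shape[0]
    A_sl = A + np.eye(n)                       # self-loops keep rows non-empty
    # Node-level concept scores in [0, 1]
    C = 1.0 / (1.0 + np.exp(-X @ W_c))         # (n, k)
    # Attention-based edge weights from concept similarity,
    # masked by the structural adjacency
    scores = np.where(A_sl > 0, C @ C.T, -np.inf)
    att = softmax(scores, axis=1)              # rows sum to 1 over neighbors
    # "Pure" variant passes messages in the concept space only;
    # otherwise raw features and concepts travel together
    H = C if pure else np.concatenate([X, C], axis=1)
    return att @ H @ W_m

# Toy usage on a 4-node ring graph
n, d, k = 4, 3, 2
X = rng.normal(size=(n, d))
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
W_c = rng.normal(size=(d, k))
W_m = rng.normal(size=(d + k, d))
out = concept_graph_conv(X, A, W_c, W_m)
print(out.shape)  # (4, 3)
```

Because the concept scores `C` are explicit per-node quantities, inspecting them before and after each call is what would give the "visibility into how node concepts evolve" that the key points describe.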