A Survey of Multi-Agent Deep Reinforcement Learning with Graph Neural Network-Based Communication

arXiv cs.AI / 4/30/2026


Key Points

  • The paper surveys multi-agent reinforcement learning (MARL) methods that use communication learned via graph neural networks (GNNs) over interaction graphs.
  • It highlights a gap in the field: the absence of a clear, explicit structure or framework to distinguish and classify GNN-based communication MARL approaches.
  • The authors propose a generalized GNN-based communication process to make the underlying concepts behind existing methods easier to understand and compare.
  • The survey aims to improve accessibility of ideas and support clearer categorization of techniques in GNN-enabled multi-agent coordination.
  • The work is presented as an arXiv announcement (v1), positioning it as an educational/analytical overview rather than a new system deployment.

Abstract

In multi-agent reinforcement learning (MARL), integrating a communication mechanism allows agents to better learn to coordinate their actions and converge on their objectives by sharing information. Based on an interaction graph, a subclass of methods employs graph neural networks (GNNs) to learn the communication, enabling agents to improve their internal representations by enriching them with the exchanged information. With the growing body of research, we note a lack of an explicit structure and framework to distinguish and classify GNN-based communication MARL approaches. This paper therefore surveys recent works in the field. We propose a generalized GNN-based communication process with the goal of making the underlying concepts behind these methods more obvious and accessible.
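To make the idea of GNN-based communication concrete, here is a minimal sketch of one communication round over an agent interaction graph. This is an illustrative simplification, not the paper's generalized process: the message, aggregation, and update functions (`W_msg`, `W_upd`, mean pooling) are hypothetical stand-ins for what would be learned components in an actual MARL method.

```python
import numpy as np

# Illustrative sketch (not the surveyed methods' exact formulation) of one
# round of GNN-based communication: each agent emits a message derived from
# its hidden state, aggregates messages from its neighbors in the
# interaction graph, and updates its internal representation.

rng = np.random.default_rng(0)
n_agents, hidden_dim, msg_dim = 4, 8, 6

# Hidden states h_i (e.g., produced by each agent's observation encoder).
H = rng.normal(size=(n_agents, hidden_dim))

# Interaction graph as an adjacency matrix (1 = agents can communicate).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Hypothetical "learned" parameters (random here, for illustration only).
W_msg = rng.normal(size=(hidden_dim, msg_dim))                # message fn
W_upd = rng.normal(size=(hidden_dim + msg_dim, hidden_dim))   # update fn

def communicate(H, A):
    """One GNN communication round: message -> aggregate -> update."""
    M = np.tanh(H @ W_msg)                        # each agent's outgoing message
    deg = A.sum(axis=1, keepdims=True).clip(min=1)
    agg = (A @ M) / deg                           # mean over neighbors' messages
    return np.tanh(np.concatenate([H, agg], axis=1) @ W_upd)

H_new = communicate(H, A)
print(H_new.shape)  # (4, 8): one enriched representation per agent
```

Stacking several such rounds lets information propagate across multi-hop neighborhoods of the interaction graph, which is the mechanism by which agents' representations are "enriched with exchanged information" in the approaches the survey classifies.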