CODA: Coordination via On-Policy Diffusion for Multi-Agent Offline Reinforcement Learning

arXiv cs.LG / 4/28/2026

Key Points

  • The paper introduces CODA, a diffusion-based multi-agent trajectory generator designed for offline multi-agent reinforcement learning (MARL) to reduce coordination failures caused by static, off-policy data.
  • CODA generates synthetic experience conditioned on the current joint policy, so training more closely approximates on-policy co-adaptation instead of relying on a static augmented dataset (see the sketch after this list).
  • The method is algorithm-agnostic and can be plugged into both model-free and model-based offline RL pipelines as an augmentation module.
  • Experiments show CODA improves coordination on continuous polynomial games and achieves strong performance on more complex MaMuJoCo continuous-control benchmarks.
  • The authors argue that earlier diffusion-based augmentation approaches fall short for MARL coordination because the augmented data they produce is fixed once generated and does not evolve alongside the changing joint policy during training.
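
To make the core mechanism concrete, here is a minimal, illustrative sketch of policy-conditioned diffusion sampling. It is a simplification under stated assumptions, not the paper's architecture: the PolicyConditionedDenoiser, the flattened joint-trajectory encoding, the DDPM noise schedule, and the shape conventions (policy_emb of shape (1, emb_dim)) are all hypothetical stand-ins.

```python
# Hypothetical sketch of CODA-style policy-conditioned trajectory sampling.
import torch
import torch.nn as nn

class PolicyConditionedDenoiser(nn.Module):
    """Predicts the noise in a flattened joint trajectory, given the
    diffusion timestep and an embedding of the current joint policy.
    (Illustrative MLP; the paper's network may differ.)"""

    def __init__(self, traj_dim, policy_emb_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(traj_dim + 1 + policy_emb_dim, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, traj_dim),
        )

    def forward(self, x, t, policy_emb):
        # Condition on the (scaled) diffusion timestep and the policy embedding.
        t_feat = t.float().unsqueeze(-1) / 1000.0
        return self.net(torch.cat([x, t_feat, policy_emb], dim=-1))


@torch.no_grad()
def sample_joint_trajectories(denoiser, policy_emb, n, traj_dim, steps=1000):
    """DDPM-style ancestral sampling, with the joint-policy embedding
    threaded through every denoising step so samples track the agents'
    current behaviours rather than a frozen dataset."""
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    x = torch.randn(n, traj_dim)  # start from pure noise
    for t in reversed(range(steps)):
        t_batch = torch.full((n,), t)
        eps = denoiser(x, t_batch, policy_emb.expand(n, -1))
        mean = (x - betas[t] / (1.0 - alpha_bars[t]).sqrt() * eps) / alphas[t].sqrt()
        x = mean + betas[t].sqrt() * torch.randn_like(x) if t > 0 else mean
    return x  # synthetic joint trajectories conditioned on the current policy
```

The key design point, as the paper frames it, is the conditioning input: because policy_emb is recomputed from the current joint policy, the sampler's output distribution shifts as the agents' policies shift, unlike a dataset augmented once up front.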

Abstract

Offline multi-agent reinforcement learning (MARL) enables policy learning from fixed datasets, but is prone to coordination failure: agents trained on static, off-policy data converge to suboptimal joint behaviours because they cannot co-adapt as their policies change. We introduce CODA (Coordination via On-Policy Diffusion for Multi-Agent Offline Reinforcement Learning), a diffusion-based multi-agent trajectory generator for data augmentation that samples conditioned on the current joint policy, producing synthetic experience that reflects the evolving behaviours of the agents, thereby providing a mechanism for co-adaptation. We find that previous diffusion-based augmentation approaches are insufficient for fostering multi-agent coordination because they produce static augmented datasets that do not evolve as the current joint policy changes during training; CODA resolves this by more closely simulating on-policy learning and is a meaningful step toward coordinated behaviours in the offline setting. CODA is algorithm-agnostic and can be layered onto both model-free and model-based offline reinforcement learning pipelines as an augmentation module. Empirically, CODA not only resolves canonical coordination pathologies in continuous polynomial games but also delivers strong results on the more complex MaMuJoCo continuous-control benchmarks.
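
Since the paper describes CODA as an algorithm-agnostic augmentation module, one natural integration pattern is to mix freshly generated trajectories into each update batch of an existing offline MARL learner. The sketch below reuses the hypothetical sample_joint_trajectories function from the previous snippet; the learner, offline_buffer, and embed_policy interfaces are likewise placeholders, and the paper's actual integration details may differ.

```python
# Hypothetical sketch: layering policy-conditioned augmentation onto a
# generic offline MARL training loop (reuses sample_joint_trajectories
# from the snippet above).
import torch

def train_with_augmentation(learner, offline_buffer, denoiser, embed_policy,
                            iterations=10_000, real_batch=192, synth_batch=64):
    for _ in range(iterations):
        # Re-condition the generator on the *current* joint policy so the
        # synthetic data co-evolves with the agents during training.
        policy_emb = embed_policy(learner.joint_policy)  # shape (1, emb_dim)
        synthetic = sample_joint_trajectories(
            denoiser, policy_emb, n=synth_batch,
            traj_dim=offline_buffer.traj_dim)
        real = offline_buffer.sample(real_batch)
        # The base offline RL update (model-free or model-based) is left
        # untouched; augmentation only changes the data mixture it sees.
        learner.update(torch.cat([real, synthetic], dim=0))
```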