Learning to Communicate: Toward End-to-End Optimization of Multi-Agent Language Systems

arXiv cs.AI / 4/25/2026


Key Points

  • The paper argues that multi-agent LLM research often treats inter-agent communication as a fixed text/protocol interface rather than something that can be jointly optimized with reasoning.
  • It proposes DiffMAS, a training framework that treats latent (non-text) communication—implemented via internal representations such as key-value caches—as a learnable component of multi-agent systems.
  • DiffMAS uses parameter-efficient supervised training over multi-agent latent trajectories so agents can learn how to encode and interpret information across interactions.
  • Experiments on mathematical reasoning, scientific QA, code generation, and commonsense benchmarks show consistent improvements in reasoning accuracy and decoding stability versus single-agent inference, text-based multi-agent setups, and prior latent communication approaches.
  • Reported results include 26.7% on AIME24 and 20.2% on GPQA-Diamond, along with stable gains across multiple reasoning benchmarks.
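To make the idea of latent (non-text) communication concrete, here is a minimal schematic sketch. It is not the paper's implementation: the `Agent` class, the linear encode/decode maps, and all shapes are illustrative assumptions standing in for the internal representations (such as key-value caches) that a real LLM agent would expose, and for the parameter-efficient weights DiffMAS would train over latent trajectories.

```python
import numpy as np

# Schematic sketch of latent (non-text) inter-agent communication.
# All names, shapes, and the linear maps are illustrative assumptions,
# not DiffMAS's actual architecture or API.

rng = np.random.default_rng(0)
D_MODEL, D_LATENT = 8, 4  # hypothetical hidden and message dimensions

class Agent:
    """Toy agent whose linear 'encoder'/'decoder' stand in for the
    internal representations (e.g. key-value caches) an LLM agent
    would communicate through."""
    def __init__(self, rng):
        # In DiffMAS these would be the parameter-efficient weights
        # learned jointly over multi-agent latent trajectories.
        self.W_enc = rng.normal(size=(D_MODEL, D_LATENT))
        self.W_dec = rng.normal(size=(D_LATENT, D_MODEL))

    def encode(self, hidden):
        # Internal hidden state -> latent message (sent instead of text).
        return hidden @ self.W_enc

    def decode(self, latent):
        # Latent message -> representation the receiver can integrate.
        return latent @ self.W_dec

sender, receiver = Agent(rng), Agent(rng)

hidden = rng.normal(size=(D_MODEL,))    # sender's internal state
latent_msg = sender.encode(hidden)      # latent message replaces a text turn
received = receiver.decode(latent_msg)  # receiver interprets the message

assert latent_msg.shape == (D_LATENT,)
assert received.shape == (D_MODEL,)
```

The point of joint training, as the paper frames it, is that `W_enc` and `W_dec` are optimized together across agents, so the sender's encoding and the receiver's interpretation co-adapt rather than being fixed by a text protocol.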

Abstract

Multi-agent systems built on large language models have shown strong performance on complex reasoning tasks, yet most work focuses on agent roles and orchestration while treating inter-agent communication as a fixed interface. Latent communication through internal representations such as key-value caches offers a promising alternative to text-based protocols, but existing approaches do not jointly optimize communication with multi-agent reasoning. We therefore propose DiffMAS, a training framework that treats latent communication as a learnable component of multi-agent systems. DiffMAS performs parameter-efficient supervised training over multi-agent latent trajectories, enabling agents to jointly learn how information should be encoded and interpreted across interactions. Experiments on mathematical reasoning, scientific QA, code generation, and commonsense benchmarks show that DiffMAS consistently improves reasoning accuracy and decoding stability over single-agent inference, text-based multi-agent systems, and prior latent communication methods, achieving 26.7% on AIME24 and 20.2% on GPQA-Diamond, with stable gains across reasoning benchmarks.