Agent Q-Mix: Selecting the Right Action for LLM Multi-Agent Systems through Reinforcement Learning

arXiv cs.CL / 4/3/2026


Key Points

  • The paper introduces Agent Q-Mix, a reinforcement learning framework that learns how to select and connect agents in LLM multi-agent systems by treating topology selection as a cooperative MARL problem.
  • It uses decentralized communication decisions with QMIX value factorization, where agents jointly form a round-wise communication graph by choosing communication actions.
  • The architecture combines a topology-aware GNN encoder, GRU-based memory, and per-agent Q-heads within a CTDE (centralized training, decentralized execution) setup.
  • Agent Q-Mix optimizes a reward that trades off task accuracy against token cost, aiming for both performance and efficiency.
  • Across seven coding, reasoning, and math benchmarks—including Humanity’s Last Exam (HLE)—the method reports higher average accuracy, better token efficiency, and greater robustness than prior approaches, including a reported 20.8% HLE accuracy with Gemini-3.1-Flash-Lite.
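The QMIX factorization mentioned above can be illustrated with a minimal sketch. The key idea is that a mixing network combines per-agent Q-values into a joint Q_tot, with mixing weights produced by hypernetworks conditioned on the global state and forced non-negative so that Q_tot is monotonic in each agent's Q. The shapes, random stand-in weights, and function names below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, state_dim, hidden = 3, 8, 4

# Hypernetwork parameters (random stand-ins for learned weights).
w1 = rng.normal(size=(state_dim, n_agents * hidden))
b1 = rng.normal(size=(state_dim, hidden))
w2 = rng.normal(size=(state_dim, hidden))
b2 = rng.normal(size=(state_dim, 1))

def qmix_mix(agent_qs, state):
    """Combine per-agent Q-values into Q_tot with a QMIX-style monotonic mixer.

    abs() on the hypernetwork outputs keeps the mixing weights non-negative,
    which guarantees dQ_tot/dQ_i >= 0 (the QMIX monotonicity constraint).
    """
    W1 = np.abs(state @ w1).reshape(n_agents, hidden)  # non-negative weights
    h = np.maximum(agent_qs @ W1 + state @ b1, 0.0)    # ReLU mixing layer
    W2 = np.abs(state @ w2)
    return (h @ W2 + state @ b2).item()

state = rng.normal(size=state_dim)
qs = np.array([0.2, -0.5, 1.0])
base = qmix_mix(qs, state)
bumped = qmix_mix(qs + np.array([0.3, 0.0, 0.0]), state)
assert bumped >= base  # raising any single agent's Q never lowers Q_tot
```

Monotonicity is what lets decentralized execution work: each agent can greedily maximize its own Q-head, and the joint action is still a maximizer of Q_tot.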

Abstract

Large Language Models (LLMs) have shown remarkable performance in completing various tasks. However, solving complex problems often requires the coordination of multiple agents, raising a fundamental question: how to effectively select and interconnect these agents. In this paper, we propose **Agent Q-Mix**, a reinforcement learning framework that reformulates topology selection as a cooperative Multi-Agent Reinforcement Learning (MARL) problem. Our method learns decentralized communication decisions using QMIX value factorization, where each agent selects from a set of communication actions that jointly induce a round-wise communication graph. At its core, Agent Q-Mix combines a topology-aware GNN encoder, GRU memory, and per-agent Q-heads under a Centralized Training with Decentralized Execution (CTDE) paradigm. The framework optimizes a reward function that balances task accuracy with token cost. Across seven core benchmarks in coding, reasoning, and mathematics, Agent Q-Mix achieves the highest average accuracy compared to existing methods while demonstrating superior token efficiency and robustness against agent failure. Notably, on the challenging Humanity's Last Exam (HLE) using Gemini-3.1-Flash-Lite as a backbone, Agent Q-Mix achieves 20.8% accuracy, outperforming Microsoft Agent Framework (19.2%) and LangGraph (19.2%), with AutoGen and Lobster by OpenClaw trailing further behind. These results underscore the effectiveness of learned, decentralized topology optimization in pushing the boundaries of multi-agent reasoning.
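The abstract describes a reward that balances task accuracy against token cost but does not give its exact form. One plausible shaping, shown purely as an assumption (the budget, the penalty coefficient `lam`, and the linear penalty are all hypothetical choices, not the paper's formula), is a task-success bonus minus a normalized token-spend penalty:

```python
def reward(correct: bool, tokens_used: int,
           token_budget: int = 4096, lam: float = 0.1) -> float:
    """Hypothetical shaped reward for an accuracy-vs-cost trade-off.

    +1 for a correct final answer, minus a penalty proportional to the
    fraction of the token budget consumed (capped at the full budget).
    """
    task_r = 1.0 if correct else 0.0
    cost = lam * min(tokens_used / token_budget, 1.0)
    return task_r - cost

print(reward(True, 1024))   # correct answer, quarter of the budget spent
print(reward(False, 4096))  # wrong answer at full budget -> pure penalty
```

Under a shaping like this, an agent that invokes unnecessary communication rounds pays for the extra tokens even when the final answer is correct, which pushes the learned topology toward sparser graphs.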