Single-Agent LLMs Outperform Multi-Agent Systems on Multi-Hop Reasoning Under Equal Thinking Token Budgets

arXiv cs.CL / 4/6/2026


Key Points

  • The paper argues, using an information-theoretic view based on the Data Processing Inequality, that with a fixed reasoning-token budget and perfect context utilization, single-agent systems (SAS) should be at least as information-efficient as multi-agent systems (MAS) for multi-hop reasoning.
  • It predicts MAS only becomes competitive when single-agent context utilization is degraded or when MAS is allowed to use more compute than SAS.
  • In a controlled study across three model families (Qwen3, DeepSeek-R1-Distill-Llama, and Gemini 2.5), the authors find SAS consistently match or outperform MAS on multi-hop reasoning when reasoning tokens are held constant.
  • The analysis identifies evaluation artifacts—especially in API-based budget control for Gemini 2.5 and in standard benchmarks—that can falsely inflate MAS advantages.
  • The authors conclude that many reported gains from multi-agent approaches are largely explained by unaccounted-for computation and context effects rather than by inherent architectural benefits, emphasizing the need to explicitly control compute–context–coordination trade-offs.
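The Data Processing Inequality intuition behind the first point can be illustrated numerically. In any Markov chain X → Y → Z, no post-processing of Y can recover information about X that Y has already lost, so I(X;Z) ≤ I(X;Y); each hand-off between agents is at best lossless and typically lossy. The sketch below models the hand-offs as binary symmetric channels with illustrative crossover probabilities — this toy channel model is our assumption, not the paper's formalism:

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_mi(flip):
    """I(X;Y) of a binary symmetric channel with uniform input
    and crossover probability `flip`."""
    return 1.0 - h2(flip)

# Markov chain X -> Y -> Z: each agent-to-agent hand-off is a noisy channel.
p1, p2 = 0.1, 0.2                              # illustrative crossover probabilities
p_end_to_end = p1 * (1 - p2) + (1 - p1) * p2   # composed flip probability of both hops

i_xy = bsc_mi(p1)            # information surviving one hop
i_xz = bsc_mi(p_end_to_end)  # information surviving both hops

print(f"I(X;Y) = {i_xy:.3f} bits, I(X;Z) = {i_xz:.3f} bits")
assert i_xz <= i_xy          # Data Processing Inequality holds
```

Here the second hop strictly reduces mutual information (from about 0.53 to about 0.17 bits), mirroring the claim that a single agent keeping everything in one context cannot do worse, information-wise, than a pipeline of agents passing lossy summaries.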

Abstract

Recent work reports strong performance from multi-agent LLM systems (MAS), but these gains are often confounded by increased test-time computation. When computation is normalized, single-agent systems (SAS) can match or outperform MAS, yet the theoretical basis and evaluation methodology behind this comparison remain unclear. We present an information-theoretic argument, grounded in the Data Processing Inequality, suggesting that under a fixed reasoning-token budget and with perfect context utilization, single-agent systems are more information-efficient than multi-agent systems. This perspective further predicts that multi-agent systems become competitive when a single agent's effective context utilization is degraded, or when more compute is expended. We test these predictions in a controlled empirical study across three model families (Qwen3, DeepSeek-R1-Distill-Llama, and Gemini 2.5), comparing SAS with multiple MAS architectures under matched budgets. We find that SAS consistently match or outperform MAS on multi-hop reasoning tasks when reasoning tokens are held constant. Beyond aggregate performance, we conduct a detailed diagnostic analysis of system behavior and evaluation methodology. We identify significant artifacts in API-based budget control (particularly in Gemini 2.5) and in standard benchmarks, both of which can inflate apparent gains from MAS. Overall, our results suggest that, for multi-hop reasoning tasks, many reported advantages of multi-agent systems are better explained by unaccounted-for computation and context effects than by inherent architectural benefits, and highlight the importance of understanding and explicitly controlling the trade-offs between compute, context, and coordination in agentic systems.
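The "matched budgets" condition in the study amounts to a simple accounting rule: a fair comparison charges a multi-agent system for every token generated by every agent, including planning and coordination overhead, not just the final answerer's tokens. The paper's exact accounting is not reproduced here; the sketch below (with hypothetical agent roles and token counts) just illustrates that principle:

```python
# Hypothetical token accounting: charge a multi-agent run for ALL
# reasoning tokens across agents before comparing accuracy with a
# single-agent run of the same total budget.
def total_reasoning_tokens(traces):
    """Sum reasoning tokens over every agent trace in a run."""
    return sum(t["reasoning_tokens"] for t in traces)

sas_run = [{"agent": "solo",     "reasoning_tokens": 4096}]
mas_run = [{"agent": "planner",  "reasoning_tokens": 1024},
           {"agent": "solver-1", "reasoning_tokens": 1536},
           {"agent": "solver-2", "reasoning_tokens": 1024},
           {"agent": "judge",    "reasoning_tokens": 512}]

sas_budget = total_reasoning_tokens(sas_run)
mas_budget = total_reasoning_tokens(mas_run)
assert sas_budget == mas_budget == 4096  # budgets matched before comparison
```

Comparisons that omit this step credit MAS with extra test-time compute, which is exactly the confound the abstract says inflates many reported multi-agent gains.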
