Do Agent Societies Develop Intellectual Elites? The Hidden Power Laws of Collective Cognition in LLM Multi-Agent Systems

arXiv cs.AI / 4/6/2026


Key Points

  • The paper reports a large-scale empirical study (1.5M+ interactions) of LLM multi-agent “societies,” focusing on how coordination dynamics change with task, topology, and scale.
  • It identifies three coupled power laws: coordination occurs via heavy-tailed reasoning cascades, elite-like concentration emerges through preferential attachment, and extreme coordination events become more frequent as system size increases.
  • The study links these effects to a single structural cause—an integration bottleneck—where coordination can scale with system size but consolidation (integration) does not, yielding large yet weakly integrated reasoning.
  • To address this, the authors propose Deficit-Triggered Integration (DTI), which boosts integration when imbalance is detected and improves performance specifically in regimes where coordination breaks down.
  • Overall, the work reframes scalable multi-agent intelligence as a measurable function of coordination structure and provides a quantitative framework for future improvements.
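The "intellectual elite" concentration described above can be illustrated with a toy preferential-attachment (Pólya-urn-style) simulation, where each coordination event attaches to an agent with probability proportional to how many events that agent has already accumulated. This is a hypothetical sketch of the mechanism, not the paper's model; the agent count, event count, and smoothing constant below are illustrative assumptions.

```python
import random

def simulate_preferential_attachment(n_agents=100, n_events=10_000, seed=0):
    """Toy model: each new coordination event attaches to an agent with
    probability proportional to (that agent's current event count).
    Counts start at 1 so every agent has a nonzero chance of being chosen."""
    rng = random.Random(seed)
    counts = [1] * n_agents
    for _ in range(n_events):
        # Rich-get-richer step: weighted choice by current counts.
        target = rng.choices(range(n_agents), weights=counts)[0]
        counts[target] += 1
    return counts

counts = sorted(simulate_preferential_attachment(), reverse=True)
# Under uniform attachment, the top 10% of agents would hold ~10% of events;
# preferential attachment concentrates far more activity in that elite.
top10_share = sum(counts[:10]) / sum(counts)
```

Running this with the seed fixed, the top 10 of 100 agents end up holding well over their uniform 10% share, which is the heavy-tailed, elite-like concentration the paper reports at scale.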

Abstract

Large Language Model (LLM) multi-agent systems are increasingly deployed as interacting agent societies, yet scaling these systems often yields diminishing or unstable returns, the causes of which remain poorly understood. We present the first large-scale empirical study of coordination dynamics in LLM-based multi-agent systems, introducing an atomic event-level formulation that reconstructs reasoning as cascades of coordination. Analyzing over 1.5 million interactions across tasks, topologies, and scales, we uncover three coupled laws: coordination follows heavy-tailed cascades, concentrates via preferential attachment into intellectual elites, and produces increasingly frequent extreme events as system size grows. We show that these effects are coupled through a single structural mechanism: an integration bottleneck, in which coordination expansion scales with system size while consolidation does not, producing large but weakly integrated reasoning processes. To test this mechanism, we introduce Deficit-Triggered Integration (DTI), which selectively increases integration under imbalance. DTI improves performance precisely where coordination fails, without suppressing large-scale reasoning. Together, our results establish quantitative laws of collective cognition and identify coordination structure as a fundamental, previously unmeasured axis for understanding and improving scalable multi-agent intelligence.
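The abstract describes DTI only at a high level: integration is increased selectively when coordination outpaces consolidation, and left alone otherwise so large-scale reasoning is not suppressed. The following is a hypothetical sketch of that control logic under stated assumptions; the deficit signal (a simple coordination-to-integration ratio) and the threshold value are illustrative, not the paper's actual formulation.

```python
def integration_deficit(coordination_events: int, integration_events: int) -> float:
    """Hypothetical deficit signal: how far coordination volume has
    outpaced consolidation. Guards against division by zero when no
    integration steps have occurred yet."""
    return coordination_events / max(integration_events, 1)

def should_trigger_integration(coordination_events: int,
                               integration_events: int,
                               threshold: float = 4.0) -> bool:
    """Deficit-triggered control: fire an extra consolidation step only
    when the imbalance crosses the threshold, so balanced regimes keep
    expanding their reasoning unimpeded."""
    return integration_deficit(coordination_events, integration_events) >= threshold
```

A controller built this way stays inactive while coordination and integration scale together (e.g., 10 coordination events against 5 integration events does not trigger), and intervenes only in the imbalanced regime where the paper reports coordination breaking down (e.g., 20 against 4 does).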