AI Navigate

Increasing intelligence in AI agents can worsen collective outcomes

arXiv cs.AI · March 13, 2026


Key Points

  • The paper examines AI-agent populations as a system with four controllable factors—nature (diversity of models), nurture (individual reinforcement learning), culture (emergent tribes), and resource scarcity—to study collective behavior and risks.
  • It finds that with scarce resources, greater diversity and reinforcement learning can increase dangerous system overload, whereas tribe formation can mitigate that risk; with abundant resources, overload drops to near zero, though tribe formation may slightly worsen it.
  • A single capacity-to-population ratio determines outcomes, meaning sophistication alone does not guarantee safer or better performance.
  • The findings apply to real-world AI ecosystems across devices from phones to drones and cars: some individual agents can profit handsomely even as the system overloads, underscoring the need to manage shared capacity.

Abstract

When resources are scarce, will a population of AI agents coordinate in harmony, or descend into tribal chaos? Diverse decision-making AI from different developers is entering everyday devices -- from phones and medical devices to battlefield drones and cars -- and these AI agents typically compete for finite shared resources such as charging slots, relay bandwidth, and traffic priority. Yet their collective dynamics and hence risks to users and society are poorly understood. Here we study AI-agent populations as the first system of real agents in which four key variables governing collective behaviour can be independently toggled: nature (innate LLM diversity), nurture (individual reinforcement learning), culture (emergent tribe formation), and resource scarcity. We show empirically and mathematically that when resources are scarce, AI model diversity and reinforcement learning increase dangerous system overload, though tribe formation lessens this risk. Meanwhile, some individuals profit handsomely. When resources are abundant, the same ingredients drive overload to near zero, though tribe formation makes the overload slightly worse. The crossover is arithmetical: it is where opposing tribes that form spontaneously first fit inside the available capacity. More sophisticated AI-agent populations are not better: whether their sophistication helps or harms depends entirely on a single number -- the capacity-to-population ratio -- that is knowable before any AI-agent ships.
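The abstract's central claim is that a single number, the capacity-to-population ratio, separates the scarce regime (frequent overload) from the abundant regime (overload near zero). The following toy simulation is a minimal sketch of that idea, not the paper's actual model: it assumes each agent independently requests one unit of a shared resource per round with some probability, and counts how often total demand exceeds capacity. All names and parameters here (`overload_rate`, `demand_prob`) are illustrative assumptions.

```python
import random

def overload_rate(capacity, population, demand_prob=0.5,
                  rounds=10_000, seed=0):
    """Fraction of rounds in which agent demand exceeds shared capacity.

    Toy assumption: each of `population` agents independently requests
    one unit of the shared resource (a stand-in for charging slots,
    relay bandwidth, or traffic priority) with probability `demand_prob`.
    """
    rng = random.Random(seed)
    overloads = 0
    for _ in range(rounds):
        demand = sum(rng.random() < demand_prob for _ in range(population))
        if demand > capacity:
            overloads += 1
    return overloads / rounds

# Scarce regime: capacity-to-population ratio (0.2) well below
# expected per-agent demand (0.5) -> overload is near-certain.
scarce = overload_rate(capacity=20, population=100)

# Abundant regime: ratio (0.8) well above expected demand -> overload
# essentially vanishes.
abundant = overload_rate(capacity=80, population=100)
```

Even in this stripped-down setting, the outcome flips sharply as the capacity-to-population ratio crosses the expected demand level, which is the kind of knowable-before-deployment threshold the authors emphasize.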