More Capable, Less Cooperative? When LLMs Fail At Zero-Cost Collaboration
arXiv cs.CL / 4/10/2026
Key Points
- The paper investigates when LLM agents fail to cooperate in a “zero-cost collaboration” setting where helping others has no direct personal cost, focusing on cooperation failures separate from competence issues.
- It shows that higher capability does not reliably translate into better cooperative outcomes: under identical instructions to maximize group revenue, OpenAI o3 attains only 17% of optimal collective performance while the less capable o3-mini reaches 50%.
- Using causal decomposition with automated analysis of agent communication, the authors disentangle cooperation failures from competence failures and trace the causes to agents’ reasoning and interaction dynamics.
- Targeted interventions reveal that explicit cooperative protocols can roughly double performance for lower-competence models, while small sharing incentives can improve cooperation for models with weak cooperative tendencies.
- The study concludes that scaling intelligence alone is unlikely to eliminate coordination problems in multi-agent systems, emphasizing the need for deliberate cooperative design and alignment of interaction mechanisms.
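To make the "zero-cost collaboration" framing concrete, the following toy model sketches a payoff structure in which sharing helps others at no cost to the sharer, and shows how a group's revenue can be expressed as a fraction of the cooperative optimum (the metric the key points quote, e.g. 17% vs. 50%). This is a hypothetical illustration, not the paper's actual environment; the function, parameters, and numbers are assumptions for exposition.

```python
def collective_revenue(n_agents, base_revenue, bonus_per_tip, share_prob):
    """Expected group revenue in a zero-cost sharing game.

    Each agent independently shares a useful tip with probability
    share_prob. Sharing costs the sharer nothing, and each shared tip
    adds bonus_per_tip to the revenue of every *other* agent.
    (Toy model for illustration only.)
    """
    expected_tips = n_agents * share_prob
    # Each tip benefits the n_agents - 1 other agents.
    total_bonus = expected_tips * bonus_per_tip * (n_agents - 1)
    return n_agents * base_revenue + total_bonus

# Hypothetical numbers: 4 agents, base revenue 10, each tip worth 2 to others.
all_share = collective_revenue(4, 10.0, 2.0, 1.0)  # cooperative optimum -> 64.0
no_share = collective_revenue(4, 10.0, 2.0, 0.0)   # no cooperation -> 40.0
fraction_of_optimal = no_share / all_share          # 0.625
```

Because the sharer's own payoff is unchanged by sharing, a fully rational group should always share; any gap between achieved and optimal revenue in such a setting reflects a cooperation failure rather than a capability limit, which is the distinction the paper's causal decomposition targets.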