Cooperative Profiles Predict Multi-Agent LLM Team Performance in AI for Science Workflows
arXiv cs.CL / 4/23/2026
Key Points
- The study benchmarks 35 open-weight LLMs by having them participate in six behavioral-economics games that test different cooperation mechanisms under shared constraints.
- It finds that “cooperative profiles” derived from these game behaviors strongly and robustly predict how well LLM agents perform together on AI-for-Science workflows.
- In particular, models that coordinate well in the games and favor multiplicative (team-amplifying) strategies over greedy ones produce better scientific reports when teamed up.
- The predictive relationship remains after controlling for multiple factors, suggesting that cooperative disposition is a measurable, distinct property of LLMs rather than just general capability.
- The authors propose a behavioral-games framework as a fast, low-cost diagnostic tool to screen for cooperative “fitness” before deploying expensive multi-agent systems.
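The multiplicative-versus-greedy contrast in the bullets above can be made concrete with a toy public-goods game, one of the classic behavioral-economics setups of the kind the benchmark draws on. This is a minimal illustrative sketch: the endowment, multiplier, and strategy labels are assumptions for exposition, not figures from the paper.

```python
# Toy public-goods game: each agent keeps (endowment - contribution);
# pooled contributions are multiplied and split equally among all agents.
# Endowment of 10 and multiplier of 1.6 are illustrative assumptions.

def public_goods_round(contributions, endowment=10, multiplier=1.6):
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    return [endowment - c + share for c in contributions]

# A "greedy" profile free-rides (contributes nothing);
# a "cooperative" profile contributes its full endowment.
greedy_team = public_goods_round([0, 0, 0, 0])
coop_team = public_goods_round([10, 10, 10, 10])

print(sum(greedy_team))  # 40.0 — team total when everyone free-rides
print(sum(coop_team))    # 64.0 — team total when everyone cooperates
```

Because the pot is multiplied before being shared, the all-cooperate team earns strictly more in aggregate than the all-greedy team, which is the intuition behind screening agents for multiplicative rather than greedy dispositions before deployment.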