High Volatility and Action Bias Distinguish LLMs from Humans in Group Coordination
arXiv cs.AI / 4/6/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper compares LLM and human coordination in a no-communication common-interest game called Group Binary Search, where players iteratively adjust numeric submissions based on imperfect group feedback.
- Results show that humans typically adapt and stabilize their behavior over repeated games, while LLMs often fail to improve and display excessive action switching that hinders convergence.
- The study finds that providing more informative feedback (such as the magnitude of numerical error) strongly helps human participants but has only minor effects on LLM performance.
- Using mechanism-level diagnostics such as reactivity scaling and switching dynamics across games, the authors pinpoint behavioral differences between human and LLM groups and propose a grounded way to diagnose the “coordination gap.”
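To make the setup concrete, the dynamic described above can be sketched as a toy simulation. The summary does not specify the paper's exact rules, so everything here is an assumption: players submit numbers toward a hidden target, the group receives directional feedback on its mean submission that is occasionally flipped (the "imperfect feedback"), and a simple switching-rate statistic stands in for the paper's action-switching diagnostic. Function names, step sizes, and the noise model are all hypothetical.

```python
import random

def play_group_binary_search(n_players=4, target=37, low=0, high=100,
                             rounds=20, noise=0.1, seed=0):
    """Toy simulation (NOT the paper's exact protocol): each player submits
    a number; the group receives possibly-flipped directional feedback on
    the mean submission and adjusts with a shrinking step, binary-search
    style. Returns the list of per-round submissions."""
    rng = random.Random(seed)
    subs = [rng.randint(low, high) for _ in range(n_players)]
    history = [list(subs)]
    step = (high - low) / 2
    for _ in range(rounds):
        mean = sum(subs) / len(subs)
        too_high = mean > target
        if rng.random() < noise:       # imperfect feedback: direction flipped
            too_high = not too_high
        step = max(1, step / 2)        # halve the step, as in binary search
        subs = [s - step if too_high else s + step for s in subs]
        history.append(list(subs))
    return history

def switching_rate(history):
    """Fraction of consecutive rounds in which the group's adjustment
    direction flips — a crude analogue of an action-switching diagnostic."""
    means = [sum(r) / len(r) for r in history]
    dirs = [1 if b > a else -1 for a, b in zip(means, means[1:])]
    flips = sum(1 for a, b in zip(dirs, dirs[1:]) if a != b)
    return flips / max(1, len(dirs) - 1)
```

Under this sketch, a group that converges shows a low switching rate (its mean drifts steadily toward the target), while the excessive-switching behavior attributed to LLMs would show up as a rate near the maximum.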