Strategic Algorithmic Monoculture: Experimental Evidence from Coordination Games

arXiv cs.AI / April 13, 2026


Key Points

  • The paper analyzes how AI agents coordinate in multi-agent settings by separating “primary” algorithmic monoculture (baseline action similarity) from “strategic” monoculture (similarity that changes in response to incentives).
  • Using an experimental design that isolates these two forces, the study tests both human participants and large language model (LLM) agents in coordination games (a toy version of such a design is sketched after this list).
  • Results show that LLMs exhibit high baseline action similarity and, like humans, adjust that similarity when coordination incentives change.
  • However, when rewards favor divergence, LLMs are less effective than humans at sustaining heterogeneity over time, indicating limitations in maintaining strategic non-alignment.
  • The findings suggest that incentive-driven adaptation can emerge in LLM coordination, but behavioral diversity under divergence incentives remains a key weakness.
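
The incentive logic behind such a design can be made concrete. Below is a minimal Python sketch, not the paper's actual protocol: two agents share a bias toward one action (standing in for primary monoculture), and a single flag flips the incentive between rewarding matching and rewarding divergence, which a strategic agent responds to by reshaping its action distribution. All names, probabilities, and payoff values are illustrative assumptions.

```python
import random

def biased_choice(p_a=0.8):
    # A shared lean toward "A" stands in for primary monoculture:
    # independent agents pick the same action more often than chance.
    return "A" if random.random() < p_a else "B"

def strategic_choice(reward_match, p_focal=0.95):
    # Strategic monoculture: concentrate on the focal action when
    # matching pays, and randomize to reduce overlap when divergence pays.
    p_a = p_focal if reward_match else 0.5
    return "A" if random.random() < p_a else "B"

def payoff(own, other, reward_match):
    # 1 point when the incentive condition is met, else 0
    # (payoff values are illustrative, not taken from the paper).
    return int((own == other) == reward_match)

def match_rate(policy, n_rounds=10_000):
    # Fraction of rounds in which two independent agents coincide.
    return sum(policy() == policy() for _ in range(n_rounds)) / n_rounds

print("baseline similarity:", match_rate(biased_choice))
for reward_match in (True, False):
    rate = match_rate(lambda: strategic_choice(reward_match))
    print(f"reward_match={reward_match}: similarity ~ {rate:.2f}")
```

In this toy setup, similarity sits near 0.68 at baseline, rises toward 0.9 when matching is rewarded, and falls to about 0.5 when divergence is rewarded: the incentive-responsive pattern the paper labels strategic monoculture.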

Abstract

AI agents increasingly operate in multi-agent environments where outcomes depend on coordination. We distinguish primary algorithmic monoculture -- baseline action similarity -- from strategic algorithmic monoculture, whereby agents adjust similarity in response to incentives. We implement a simple experimental design that cleanly separates these forces, and deploy it on human and large language model (LLM) subjects. LLMs exhibit high levels of baseline similarity (primary monoculture) and, like humans, they regulate it in response to coordination incentives (strategic monoculture). While LLMs coordinate extremely well on similar actions, they lag behind humans in sustaining heterogeneity when divergence is rewarded.
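
One plausible way to quantify the "action similarity" the abstract refers to, offered as an assumption rather than the paper's actual metric, is the per-round fraction of agent pairs that chose the same action; "sustaining heterogeneity" then corresponds to that fraction staying low round after round once divergence is rewarded. The action profiles below are fabricated for illustration only.

```python
from itertools import combinations

def pairwise_similarity(actions):
    # Fraction of agent pairs with identical actions in one round.
    pairs = list(combinations(actions, 2))
    return sum(a == b for a, b in pairs) / len(pairs) if pairs else 0.0

# Hypothetical action profiles for four agents across three rounds
# under a divergence incentive (not data from the paper).
rounds = [
    ["A", "A", "A", "B"],  # early round: heavy overlap
    ["A", "B", "A", "B"],  # agents begin to spread out
    ["A", "B", "B", "A"],  # low overlap; heterogeneity is sustained if it lasts
]
for t, actions in enumerate(rounds):
    print(f"round {t}: similarity = {pairwise_similarity(actions):.2f}")
```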