Social Dynamics as Critical Vulnerabilities that Undermine Objective Decision-Making in LLM Collectives

arXiv cs.CL / 4/8/2026


Key Points

  • The paper examines how social dynamics in multi-agent LLM collectives can undermine the reliability of a representative (delegate) agent that aggregates peer viewpoints into a final decision.
  • It defines four mechanisms—social conformity, perceived expertise, dominant speaker effect, and rhetorical persuasion—and tests how they affect decision accuracy.
  • Experiments systematically vary adversary count, peer capability, argument length, and argumentative style, finding that accuracy declines as social pressure increases.
  • The study also shows that rhetorical approaches that emphasize credibility or logic can further shift the delegate’s judgment depending on the social context.
  • Overall, the findings suggest that LLM multi-agent decision-making is vulnerable to group-psychology-like biases, not just to individual reasoning quality.
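
The delegate-aggregation setup described above can be illustrated with a toy simulation. This is not the paper's code: peer behavior, the majority-vote aggregation, and all function names are assumptions made for illustration. A real delegate is an LLM weighing full arguments, but even this stripped-down version shows the social-conformity failure mode: the decision flips once adversaries outnumber truthful peers.

```python
import random

def delegate_decision(correct, wrong, n_truthful, n_adversarial, seed=0):
    """Toy sketch (hypothetical, not from the paper): truthful peers report
    the correct answer, adversarial peers coordinate on a wrong one, and the
    delegate follows the collective's majority."""
    rng = random.Random(seed)
    peers = [correct] * n_truthful + [wrong] * n_adversarial
    rng.shuffle(peers)  # the delegate sees an unordered collective
    # Majority vote stands in for the LLM delegate's aggregation step.
    return max(set(peers), key=peers.count)

# With truthful peers in the majority, the delegate answers correctly;
# with adversaries in the majority, social pressure flips the decision.
print(delegate_decision("A", "B", n_truthful=3, n_adversarial=1))  # A
print(delegate_decision("A", "B", n_truthful=1, n_adversarial=3))  # B
```

The paper's actual manipulations (peer capability, argument length, rhetorical style) act on the delegate's judgment rather than on a vote count, but the qualitative effect reported is the same: accuracy degrades as adversarial social pressure grows.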

Abstract

Large language model (LLM) agents are increasingly acting as human delegates in multi-agent environments, where a representative agent integrates diverse peer perspectives to make a final decision. Drawing inspiration from social psychology, we investigate how the reliability of this representative agent is undermined by the social context of its network. We define four key phenomena (social conformity, perceived expertise, dominant speaker effect, and rhetorical persuasion) and systematically manipulate the number of adversaries, relative intelligence, argument length, and argumentative styles. Our experiments demonstrate that the representative agent's accuracy consistently declines as social pressure increases: larger adversarial groups, more capable peers, and longer arguments all lead to significant performance degradation. Furthermore, rhetorical strategies emphasizing credibility or logic can further sway the agent's judgment, depending on the context. These findings reveal that multi-agent systems are sensitive not only to individual reasoning but also to the social dynamics of their configuration, highlighting critical vulnerabilities in AI delegates that mirror the psychological biases observed in human group decision-making.