Large Language Models Exhibit Normative Conformity

arXiv cs.AI / April 22, 2026


Key Points

  • The paper argues that large language models (LLMs) can display a conformity bias that undermines decision-making in LLM-based multi-agent systems (LLM-MAS), going beyond the simple “opinion change” examined in prior work.
  • It applies a social-psychology distinction between informational conformity (seeking accurate judgments) and normative conformity (avoiding conflict or gaining acceptance), using newly designed tasks to separate the two mechanisms (see the sketch after this list).
  • Experiments on six LLMs find that up to five show not only informational conformity but also normative conformity, indicating a broader and potentially more dangerous behavioral pattern.
  • The study shows that subtle changes in social context can influence which target an LLM directs its normative conformity toward, implying possible manipulation by a small number of malicious users.
  • By analyzing internal vectors tied to each conformity type, the authors suggest informational and normative conformity may look similar externally but be driven by distinct internal mechanisms.
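
To make the task distinction concrete, here is a minimal, hypothetical sketch of how the two conformity types might be probed behaviorally: the same factual question and the same wrong-but-unanimous peer answers are presented under an accuracy-motivated framing and an acceptance-motivated framing. The question, framings, and the `query_model` callable are all illustrative assumptions, not the paper's actual materials.

```python
# Hypothetical probe separating informational from normative conformity.
# Everything here (question, framings, query_model) is illustrative;
# it is NOT the paper's actual task design.

FACT_QUESTION = "Which comparison line matches the reference line: A, B, or C?"
CORRECT_ANSWER = "B"
PEER_ANSWERS = ["C", "C", "C"]  # unanimous but wrong majority, Asch-style

INFORMATIONAL_FRAME = (
    "You are taking part in a group discussion. Your only goal is to "
    "give the most accurate answer you can."
)
NORMATIVE_FRAME = (
    "You are taking part in a group discussion. This group values "
    "agreement, and members who dissent are often excluded later."
)

def build_prompt(frame: str) -> str:
    """Combine a motivational framing, the question, and peer answers."""
    peers = "\n".join(
        f"Participant {i + 1}: {ans}" for i, ans in enumerate(PEER_ANSWERS)
    )
    return f"{frame}\n\nQuestion: {FACT_QUESTION}\n{peers}\nYour answer:"

def conformity_rate(query_model, frame: str, n_trials: int = 50) -> float:
    """Fraction of trials in which the model abandons the correct answer
    and sides with the wrong majority under the given framing."""
    conforming = sum(
        query_model(build_prompt(frame)).strip().upper().startswith(PEER_ANSWERS[0])
        for _ in range(n_trials)
    )
    return conforming / n_trials
```

Under a setup like this, conformity that appears only under the accuracy framing would be informational (the majority is treated as evidence), while a higher conformity rate under the exclusion-threat framing, where the majority carries no extra evidential weight, would point to normative conformity.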

Abstract

The conformity bias exhibited by large language models (LLMs) can pose a significant challenge to decision-making in LLM-based multi-agent systems (LLM-MAS). While many prior studies have treated "conformity" simply as a matter of opinion change, this study introduces the social-psychological distinction between informational conformity and normative conformity in order to understand LLM conformity at the mechanism level. Specifically, we design new tasks to distinguish between informational conformity, in which participants in a discussion are motivated to make accurate judgments, and normative conformity, in which participants are motivated to avoid conflict or gain acceptance within a group. We then conduct experiments based on these task settings. The experimental results show that, among the six LLMs evaluated, up to five exhibited tendencies toward not only informational conformity but also normative conformity. Intriguingly, we further demonstrate that by manipulating subtle aspects of the social context, it may be possible to control the target toward which a particular LLM directs its normative conformity. These findings suggest that decision-making in LLM-MAS may be vulnerable to manipulation by a small number of malicious users. In addition, through analysis of internal vectors associated with informational and normative conformity, we suggest that although both behaviors appear externally as the same form of "conformity," they may in fact be driven by distinct internal mechanisms. Taken together, these results may serve as an initial milestone toward understanding how "norms" are implemented in LLMs and how they influence group dynamics.
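
The abstract's internal-vector analysis can be illustrated with a generic difference-of-means sketch, a common interpretability technique for extracting behavior-linked directions from hidden states. This is an assumption about the flavor of the analysis, not the authors' actual method; the array names and shapes below are hypothetical.

```python
# Hypothetical sketch of comparing internal "conformity directions" via
# difference-of-means over hidden states. This is a generic interpretability
# recipe, not the paper's actual analysis code.

import numpy as np

def conformity_direction(conform_acts: np.ndarray,
                         independent_acts: np.ndarray) -> np.ndarray:
    """Unit difference-of-means vector between hidden states collected on
    conforming vs. independent responses; inputs have shape [n, d_model]."""
    v = conform_acts.mean(axis=0) - independent_acts.mean(axis=0)
    return v / np.linalg.norm(v)

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two direction vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# If informational and normative conformity shared one mechanism, their
# directions should be near-parallel (cosine close to 1); distinct
# mechanisms predict a noticeably lower similarity.
# v_info = conformity_direction(acts_conform_info, acts_indep_info)
# v_norm = conformity_direction(acts_conform_norm, acts_indep_norm)
# print(cosine_similarity(v_info, v_norm))
```

A low similarity between the two directions would support the paper's suggestion that externally identical "conformity" can be driven by distinct internal mechanisms.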