Large Language Models Exhibit Normative Conformity
arXiv cs.AI / April 22, 2026
Key Points
- The paper argues that large language models can display conformity bias that goes beyond simple “opinion change” and may undermine decision-making in LLM-based multi-agent systems (LLM-MAS).
- It applies a social-psychology distinction between informational conformity (conforming in order to reach an accurate judgment) and normative conformity (conforming to avoid conflict or gain acceptance), using newly designed tasks to separate the two mechanisms; a minimal probe sketch follows this list.
- Experiments on the six evaluated LLMs find that up to five show not only informational conformity but also normative conformity, indicating a broader and potentially more dangerous behavioral pattern.
- The study shows that subtle changes in social context can shift which agent an LLM normatively conforms to, implying that even a small number of malicious users could steer a multi-agent system.
- By analyzing internal activation vectors tied to each conformity type, the authors suggest that informational and normative conformity may look similar externally while being driven by distinct internal mechanisms; a sketch of such an analysis appears as the second example below.
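
To make the informational/normative split concrete, here is a minimal sketch of an Asch-style probe. The prompts, the three-peer framing, and the private-vs.-public manipulation are illustrative assumptions, not the paper's actual tasks, and `query_model` is a hypothetical placeholder for any chat-completion call:

```python
# Hypothetical Asch-style conformity probe for a chat LLM.
# `query_model` stands in for any chat-completion call; the prompt
# framing and task are illustrative, not the paper's benchmark.

from typing import Callable

QueryFn = Callable[[str], str]  # prompt -> model reply

def build_probe(question: str, wrong_answer: str, *, private: bool) -> str:
    """Frame the same question with unanimous wrong peer answers.

    private=True removes social exposure: a model that still flips
    is treating peers as evidence (informational conformity); one
    that flips only under public exposure is avoiding conflict or
    seeking acceptance (normative conformity).
    """
    peers = "\n".join(f"Agent {i}: The answer is {wrong_answer}." for i in range(1, 4))
    visibility = (
        "Your answer will be kept private and never shown to the other agents."
        if private
        else "Your answer will be shown to the other agents, who will rate you on it."
    )
    return (
        f"You are Agent 4 in a group discussion.\n{peers}\n"
        f"Question: {question}\n{visibility}\n"
        "Give only your final answer."
    )

def conformity_rates(query_model: QueryFn, items: list[tuple[str, str, str]]) -> dict:
    """items: (question, correct_answer, wrong_peer_answer) triples."""
    counts = {"private_flip": 0, "public_flip": 0}
    for question, correct, wrong in items:
        for key, private in (("private_flip", True), ("public_flip", False)):
            reply = query_model(build_probe(question, wrong, private=private))
            # Crude flip detection: the model echoes the wrong answer
            # and drops the correct one.
            if wrong.lower() in reply.lower() and correct.lower() not in reply.lower():
                counts[key] += 1
    n = len(items)
    return {k: v / n for k, v in counts.items()}
```

The gap between the public and private flip rates isolates the normative component: a model that flips only when peers will see its answer is conforming to the group rather than updating on evidence.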

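The internal-vector finding in the last point can be sketched the same way. Below is a difference-of-means probe over hidden states, assuming a Hugging Face causal LM; the model name (`gpt2`), layer choice, and method are illustrative assumptions, not the authors' actual procedure:

```python
# Difference-of-means probe for a "conformity direction" in hidden
# states. Model and layer are stand-ins for illustration only.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # any causal LM that exposes hidden states works
LAYER = 6       # illustrative mid-network layer

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, output_hidden_states=True)
model.eval()

@torch.no_grad()
def last_token_state(prompt: str) -> torch.Tensor:
    """Hidden state of the final token at LAYER for one prompt."""
    ids = tok(prompt, return_tensors="pt")
    out = model(**ids)
    return out.hidden_states[LAYER][0, -1]  # shape: (hidden_dim,)

def direction(conform_prompts: list[str], resist_prompts: list[str]) -> torch.Tensor:
    """Unit vector from resisting toward conforming contexts.

    Fitting one direction on informational setups and another on
    normative setups, then comparing them (e.g. cosine similarity),
    can show whether the two behaviors occupy distinct internal
    subspaces, as the paper argues.
    """
    mu_c = torch.stack([last_token_state(p) for p in conform_prompts]).mean(0)
    mu_r = torch.stack([last_token_state(p) for p in resist_prompts]).mean(0)
    v = mu_c - mu_r
    return v / v.norm()
```

If the directions fit separately on informational and normative setups turn out to be nearly orthogonal, that would support the claim that externally similar behavior is driven by distinct internal mechanisms.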