Too Polite to Disagree: Understanding Sycophancy Propagation in Multi-Agent Systems
arXiv cs.AI / 4/6/2026
💬 OpinionSignals & Early TrendsIdeas & Deep AnalysisModels & Research
Key Points
- The paper studies how sycophancy in large language models propagates in collaborative multi-agent discussions, extending prior mostly single-agent research to multi-agent settings.
- It runs controlled experiments with six open-source LLMs, using “peer sycophancy priors” (static or dynamic pre-/in-discussion rankings) to estimate each agent’s tendency to agree excessively.
- The results show that providing sycophancy priors reduces the impact of sycophancy-prone agents on group outcomes.
- It also mitigates error cascades and improves final discussion accuracy by an absolute 10.5%.
- The authors conclude that injecting lightweight sycophancy-awareness can be an effective way to reduce agreement bias and improve downstream decision quality in multi-agent systems.




