Investigating the Influence of Language on Sycophantic Behavior of Multilingual LLMs

arXiv cs.CL / March 31, 2026


Key Points

  • The arXiv study examines whether the language of a prompt affects sycophantic behavior in multilingual LLMs, even after prior mitigation efforts have reduced sycophancy overall.
  • It evaluates GPT-4o mini, Gemini 1.5 Flash, and Claude 3.5 Haiku on tweet-like opinion prompts translated from English into five additional languages (Arabic, Chinese, French, Spanish, Portuguese).
  • Results indicate that newer models are overall less sycophantic than earlier generations, but sycophancy levels still vary systematically by language.
  • The authors provide a detailed breakdown showing language-dependent shifts in agreeableness on sensitive topics, suggesting cultural and linguistic patterns.
  • The paper concludes that multilingual audits remain necessary to ensure trustworthy and bias-aware deployment of LLMs across languages.

Abstract

Large language models (LLMs) have achieved strong performance across a wide range of tasks, but they are also prone to sycophancy, the tendency to agree with user statements regardless of their validity. Previous research has outlined both the extent and the underlying causes of sycophancy in earlier models, such as ChatGPT-3.5 and Davinci. Newer models have since undergone multiple mitigation strategies, yet there remains a critical need to systematically test their behavior. In particular, the effect of language on sycophancy has not been explored. In this work, we investigate how the prompt language influences sycophantic responses. We evaluate three state-of-the-art models, GPT-4o mini, Gemini 1.5 Flash, and Claude 3.5 Haiku, using a set of tweet-like opinion prompts translated into five additional languages: Arabic, Chinese, French, Spanish, and Portuguese. Our results show that although newer models exhibit significantly less sycophancy overall compared to earlier generations, the extent of sycophancy is still influenced by the prompt language. We further provide a granular analysis of how language shapes model agreeableness across sensitive topics, revealing systematic cultural and linguistic patterns. These findings highlight both the progress of mitigation efforts and the need for broader multilingual audits to ensure trustworthy and bias-aware deployment of LLMs.
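To make the audit design concrete, here is a minimal sketch of a per-language sycophancy measurement loop. Everything in it is an assumption for illustration: `query_model` stands in for whichever chat API wrapper is used, `prompts` is a hypothetical mapping from language code to translated tweet-like opinion statements, `classify_agreement` is a placeholder for the paper's response-labeling step, and the model identifier strings are illustrative rather than taken from the study.

```python
from collections import defaultdict

# The three evaluated models and the prompt languages (English plus the
# five translation targets). Identifier strings are illustrative.
MODELS = ["gpt-4o-mini", "gemini-1.5-flash", "claude-3.5-haiku"]
LANGUAGES = ["en", "ar", "zh", "fr", "es", "pt"]

def sycophancy_rates(query_model, classify_agreement, prompts):
    """Fraction of replies that agree with the prompted opinion,
    broken down per (model, language) pair."""
    agree = defaultdict(int)
    total = defaultdict(int)
    for model in MODELS:
        for lang in LANGUAGES:
            for prompt in prompts[lang]:
                reply = query_model(model, prompt)  # hypothetical API wrapper
                agree[(model, lang)] += int(classify_agreement(reply))
                total[(model, lang)] += 1
    return {key: agree[key] / total[key] for key in total}
```

Holding the model fixed and comparing the returned rates across language codes gives the kind of per-language breakdown the key points describe; the paper's actual prompt set and scoring scheme may differ from this sketch.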