When AI Speaks, Whose Values Does It Express? A Cross-Cultural Audit of Individualism-Collectivism Bias in Large Language Models
arXiv cs.AI / 4/27/2026
Key Points
- Researchers tested three leading large language models (Claude Sonnet 4.5, GPT-5.4, and Gemini 2.5 Flash) using 10 real-life personal dilemmas phrased for users from 10 countries across 7 languages (n=840 responses).
- Across models, the AI advice skewed toward Western-style individualism even for users from more collectivist societies, deviating significantly from World Values Survey Wave 7 expectations (mean gap +0.76 on a 1–5 scale).
- The bias was largest for Nigeria (+1.85) and also notable for India (+0.82), while Japan was the only exception where the models appeared more group-oriented than survey data.
- The bias manifested differently across models: Claude shifted more collectivist when prompted in the user's native language, Gemini shifted more individualist, and GPT-5.4 responded mainly to the stated country identity.
- The study concludes that frontier AI can homogenize cultural values and provides openly released data, code, and a scoring pipeline for replication.
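The "mean gap" metric above can be illustrated with a minimal sketch. All numbers, field names, and the scoring itself are illustrative assumptions, not the paper's released pipeline: it assumes each model response is rated on the same 1–5 individualism scale as a country-level expectation derived from World Values Survey Wave 7, with the gap being the signed difference.

```python
# Hedged sketch of a per-country "mean gap" computation.
# Scores are on a 1-5 individualism scale (5 = most individualist);
# a positive gap means the model's advice skews more individualist
# than the WVS-derived expectation. All values below are invented.
responses = [
    {"country": "Nigeria", "model_score": 4.60, "wvs_expectation": 2.75},
    {"country": "Nigeria", "model_score": 4.40, "wvs_expectation": 2.75},
    {"country": "India",   "model_score": 3.90, "wvs_expectation": 3.08},
    {"country": "Japan",   "model_score": 2.90, "wvs_expectation": 3.30},
]

def mean_gap(rows):
    """Mean (model - survey) difference over a list of scored responses."""
    gaps = [r["model_score"] - r["wvs_expectation"] for r in rows]
    return sum(gaps) / len(gaps)

def gap_by_country(rows):
    """Group responses by country and report each country's mean gap."""
    by_country = {}
    for r in rows:
        by_country.setdefault(r["country"], []).append(r)
    return {c: round(mean_gap(rs), 2) for c, rs in by_country.items()}

print(gap_by_country(responses))
# e.g. Nigeria positive (individualist skew), Japan negative (group-oriented)
```

With this framing, the headline findings reduce to the sign and magnitude of each country's mean gap: large positive for Nigeria, moderate positive for India, and negative for Japan.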