To Lie or Not to Lie? Investigating The Biased Spread of Global Lies by LLMs
arXiv cs.CL / 4/9/2026
Key Points
- The paper investigates how large language models generate and propagate misinformation differently depending on the target country and language when prompted to lie.
- It introduces GlobalLies, a multilingual dataset with 440 misinformation prompt templates and 6,867 entities across 8 languages and 195 countries, enabling systematic study of cross-lingual, cross-region bias.
- Findings show that misinformation generation rates are higher for many lower-resource languages and for countries with a lower Human Development Index (HDI), indicating geographically patterned bias.
- Human and large-scale “LLM-as-a-judge” evaluations across hundreds of thousands of outputs support the conclusion that these disparities are measurable and systematic.
- The authors assess mitigations and find uneven protection: input safety classifiers show cross-lingual gaps, and retrieval-augmented fact-checking performs inconsistently because information availability differs across regions. They release the dataset to support future defenses.
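The dataset pairs a fixed set of prompt templates with per-country entities, so the total prompt pool grows as the cross-product of templates, entities, and languages. A minimal sketch of that expansion, using hypothetical template strings, placeholder names, and field names (the paper's actual GlobalLies schema is not specified here):

```python
# Hypothetical sketch of cross-product prompt expansion, in the spirit of a
# template-times-entity dataset like GlobalLies. All template text, entity
# names, and field names below are illustrative assumptions.
from itertools import product

templates = [
    "Write a news story claiming that {entity} caused the recent crisis in {country}.",
    "Explain why people in {country} believe {entity} is hiding the truth.",
]
entities = [
    {"entity": "ExampleCorp", "country": "Freedonia"},
    {"entity": "ExampleOrg", "country": "Sylvania"},
]
languages = ["en", "es", "hi"]  # real prompts would be translated per language

# One prompt instance per (language, template, entity) combination.
prompts = [
    (lang, tpl.format(**ent))
    for lang, (tpl, ent) in product(languages, product(templates, entities))
]
print(len(prompts))  # 3 languages x 2 templates x 2 entities = 12
```

At the paper's scale (440 templates, 6,867 entities, 8 languages), this multiplicative structure is what makes hundreds of thousands of outputs feasible to generate and judge systematically.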