Location Not Found: Exposing Implicit Local and Global Biases in Multilingual LLMs
arXiv cs.CL / 4/22/2026
Key Points
- The paper argues that while multilingual LLMs have become fluent across languages, they can still behave in biased ways because knowledge and cultural norms learned in one language can leak into others.
- It introduces LocQA, a multilingual benchmark with 2,156 locale-ambiguous questions across 12 languages, designed so the question text contains no locale cues besides the querying language.
- Using LocQA, the authors evaluate 32 models and identify two structural bias types. The first is a global US-locale bias that appears across languages and grows stronger after instruction tuning.
- The second is intra-lingual bias: when multiple locales are plausible for a language, models act like "demographic probability engines," preferring the locales with larger populations.
- The results suggest LocQA can be used to measure implicit priors and to assess how different training phases affect bias behavior in multilingual LLMs.
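To make the evaluation concrete, here is a minimal sketch of how a LocQA-style scoring loop could work. This is not the authors' code: the items, the `mock_model` stand-in, and the matching logic are all illustrative assumptions. The idea is that each locale-ambiguous question has a different correct answer per locale, so the model's reply reveals which locale it implicitly assumed.

```python
from collections import Counter

# Hypothetical LocQA-style items (illustrative, not from the benchmark):
# each question is locale-ambiguous, and each plausible locale implies a
# different answer.
items = [
    {"question": "What is the emergency phone number?",
     "answers": {"US": "911", "UK": "999", "DE": "112"}},
    {"question": "At what age can you get a full driving licence?",
     "answers": {"US": "16", "UK": "17", "DE": "18"}},
]

def mock_model(question: str) -> str:
    """Stand-in for an LLM call; always answers with the US convention,
    mimicking the global US-locale bias the paper reports."""
    us_answers = {
        "What is the emergency phone number?": "911",
        "At what age can you get a full driving licence?": "16",
    }
    return us_answers[question]

def locale_preference(items, model) -> Counter:
    """Map each model reply back to the locale(s) whose answer it matches."""
    counts = Counter()
    for item in items:
        reply = model(item["question"])
        for locale, answer in item["answers"].items():
            if answer in reply:
                counts[locale] += 1
    return counts

counts = locale_preference(items, mock_model)
us_bias_rate = counts["US"] / len(items)  # fraction of replies matching the US locale
```

Aggregating `us_bias_rate` per querying language (and per training stage, e.g. base vs. instruction-tuned checkpoints) would surface exactly the cross-lingual and post-tuning shifts the paper measures.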