Location Not Found: Exposing Implicit Local and Global Biases in Multilingual LLMs

arXiv cs.CL / April 22, 2026


Key Points

  • The paper argues that while multilingual LLMs have improved fluency across languages, they can still exhibit biased behavior because knowledge and norms may transfer across languages.
  • It introduces LocQA, a multilingual benchmark with 2,156 locale-ambiguous questions across 12 languages, designed so the question text contains no locale cues besides the querying language.
  • Using LocQA, the authors evaluate 32 models and detect two structural bias types. Inter-lingually, they find a global bias toward US-locale answers regardless of the querying language, and this bias is exacerbated in instruction-tuned models compared to their base counterparts.
  • The study also finds intra-lingual bias, where models effectively act like “demographic probability engines,” preferring locales associated with larger populations when multiple locales are plausible.
  • The results suggest LocQA can be used to measure implicit priors and to assess how different training phases affect bias behavior in multilingual LLMs.
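The probing idea behind the benchmark can be illustrated with a minimal sketch (not the authors' code; the question, gold answers, and helper names below are hypothetical): ask a model a question whose correct answer depends on the locale, give no locale cue besides the language, and check which locale's gold answer the reply matches.

```python
# Hypothetical sketch of a LocQA-style probe: a locale-ambiguous question
# ("At what age may you drive a car?", asked in German) has different gold
# answers per locale; the locale the model's reply matches reveals its prior.
GOLD = {
    "de": {"Germany": "18", "Austria": "17", "United States": "16"},
}

def infer_locale(question_lang, model_answer, gold=GOLD):
    """Return the locales whose gold answer appears in the model's reply."""
    return [loc for loc, ans in gold[question_lang].items() if ans in model_answer]

# Stub standing in for a real LLM call.
def toy_model(question):
    return "In most states you can get a license at 16."

answer = toy_model("Ab welchem Alter darf man Auto fahren?")
print(infer_locale("de", answer))  # → ['United States']
```

Aggregating such matches over many questions and languages would yield the kind of inter- and intra-lingual locale distributions the paper reports; real answer matching would need normalization beyond the simple substring check used here.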

Abstract

Multilingual large language models (LLMs) have minimized the fluency gap between languages. This advancement, however, exposes models to the risk of biased behavior, as knowledge and norms may propagate across languages. In this work, we aim to quantify models' inter- and intra-lingual biases via their ability to answer locale-ambiguous questions. To this end, we present LocQA, a test set containing 2,156 questions in 12 languages, referring to various locale-dependent facts such as laws, dates, and measurements. The questions contain no indication of the locales they relate to, other than the querying language itself. LLMs' responses to LocQA's locale-ambiguous questions thus reveal models' implicit priors. We use LocQA to evaluate 32 models and detect two types of structural bias. Inter-lingually, we show a global bias towards answers relevant to the US locale, even when models are asked in languages other than English. Moreover, we find that this global bias is exacerbated in models that underwent instruction tuning, compared to their base counterparts. Intra-lingually, we show that when multiple locales are relevant for the same language, models act as demographic probability engines, prioritizing locales with larger populations. Taken together, insights from LocQA may help in shaping LLMs' desired local behavior, and in quantifying the impact of various training phases on different kinds of biases.