All Languages Matter: Understanding and Mitigating Language Bias in Multilingual RAG
arXiv cs.CL / 4/23/2026
📰 News · Models & Research
Key Points
- The paper finds that multilingual RAG (mRAG) systems exhibit language bias in the reranking stage, tending to prioritize English and the query’s native language over other languages.
- Through an estimated "oracle evidence" analysis, the authors quantify a sizable gap between the performance of existing rerankers and the theoretical upper bound of what reranking could achieve.
- They identify a key distribution mismatch: optimal answers require evidence dispersed across multiple languages, but current systems suppress these “answer-critical” documents.
- To address this, the authors propose LAURA (Language-Agnostic Utility-driven Reranker Alignment), which aligns multilingual evidence ranking with downstream generative usefulness.
- Experiments across multiple languages and generation models show that LAURA reduces language bias and yields consistent mRAG performance improvements.
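The core idea behind utility-driven reranking can be illustrated with a toy sketch. Note this is not the paper's actual LAURA implementation: the function names and the token-overlap "utility" proxy are illustrative assumptions. A real system would estimate downstream usefulness with the generator itself (e.g., the answer's log-likelihood given the document), but the contrast between ranking by query similarity and ranking by answer utility is the same.

```python
# Illustrative sketch only: names and scoring proxies are assumptions,
# not the LAURA implementation described in the paper.

def relevance_score(query: str, doc: str) -> float:
    """Naive query-similarity score: fraction of query tokens found in the doc."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def utility_score(doc: str, reference_answer: str) -> float:
    """Proxy for downstream generative usefulness: fraction of answer
    tokens the doc supplies. Stands in for the generator-based utility
    signal a real utility-driven reranker would use."""
    a, d = set(reference_answer.lower().split()), set(doc.lower().split())
    return len(a & d) / len(a) if a else 0.0

def rerank(docs, score_fn):
    """Sort documents by descending score under the given scoring function."""
    return sorted(docs, key=score_fn, reverse=True)

# Toy corpus: the first doc is "answer-critical" (it contains the answer)
# but shares few surface tokens with the query; the second doc is
# query-similar yet useless for generating the answer.
docs = [
    "capital city france paris located seine",
    "what is the capital of germany berlin",
]
query = "what is the capital of france"
answer = "paris"

by_relevance = rerank(docs, lambda d: relevance_score(query, d))
by_utility = rerank(docs, lambda d: utility_score(d, answer))
# Similarity-based reranking puts the answer-free doc first;
# utility-based reranking surfaces the answer-critical doc.
```

The mismatch the paper describes shows up directly here: the similarity reranker top-ranks the document that merely looks like the query, while the utility reranker top-ranks the document the generator actually needs.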