Are Non-English Papers Reviewed Fairly? Language-of-Study Bias in NLP Peer Reviews
arXiv cs.CL / 4/9/2026
Key Points
- The paper investigates language-of-study (LoS) bias in NLP peer review, where reviewer judgments may shift based on the languages a paper studies rather than on its scientific merit.
- It presents the first systematic characterization of LoS bias, distinguishing negative from positive forms and showing that papers on non-English languages face substantially higher bias rates than English-only papers.
- In an analysis of 15,645 reviews, the study finds that negative bias consistently outweighs positive bias, with unjustified demands for cross-lingual generalization as the dominant subtype.
- The authors introduce LOBSTER, a human-annotated dataset, and a detection method that reaches a macro F1 of 87.37 (see the sketch after this list), aiming to make identification of this bias more reliable.
- All resources are publicly released to support fairer reviewing practices in NLP and potentially other fields.
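The key points don't describe how the detection method actually works, so the following is a minimal, hypothetical sketch of one plausible task framing only: treating LoS-bias detection as sentence-level classification over review text and scoring it with macro F1, the metric the paper reports. The baseline model (TF-IDF features plus logistic regression via scikit-learn), the three-way label scheme, and the toy snippets are all assumptions for illustration, not the authors' method or LOBSTER data.

```python
# Hypothetical sketch: LoS-bias detection as 3-way text classification,
# evaluated with macro F1. Toy snippets and labels are illustrative only;
# the real task would train on the LOBSTER annotations.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

# Labels: 0 = no bias, 1 = negative LoS bias, 2 = positive LoS bias.
train_texts = [
    "The experiments are thorough and the ablations are convincing.",
    "Why not evaluate on English benchmarks? Results on one language alone are not interesting.",
    "Working on a low-resource language automatically makes this paper valuable.",
    "The method should be tested on many more languages before claiming any contribution.",
    "The related-work section is missing several recent baselines.",
    "Studying this language is a contribution in itself, regardless of the results.",
]
train_labels = [0, 1, 2, 1, 0, 2]

test_texts = [
    "The paper only covers a single African language, so the findings may not matter broadly.",
    "The statistical analysis is sound and clearly reported.",
]
test_labels = [1, 0]

# A deliberately simple baseline: TF-IDF unigrams/bigrams + logistic regression.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
clf.fit(train_texts, train_labels)

preds = clf.predict(test_texts)
# Macro F1 averages the per-class F1 scores with equal weight, so the
# (presumably rare) bias classes count as much as the majority no-bias class.
print("macro F1:", f1_score(test_labels, preds, average="macro"))
```

Macro averaging is the natural choice here because sentences exhibiting bias are presumably rare relative to ordinary review text; weighting each class equally keeps the minority bias classes from being drowned out by the no-bias majority.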