Round-Trip Translation Reveals What Frontier Multilingual Benchmarks Miss

arXiv cs.CL / April 15, 2026


Key Points

  • The paper argues that current frontier multilingual benchmarks often end up measuring mathematical reasoning and factual recall rather than true multilingual proficiency.
  • It reports that “thinking” model variants can score much higher than “instruct” variants on these structured multilingual evaluations, while performing worse on real-world multilingual tasks like LMArena.
  • To better assess multilingual capability, the authors propose round-trip translation (source → target → back to source) and use semantic gaps between the original and the final text as an error signal (see the sketch after this list).
  • The approach is shown to correlate almost perfectly (Spearman ρ = 0.94) with user ratings on LMArena, while requiring no human reference translations and avoiding reliance on a stronger multilingual judge.
  • The authors release a new benchmark, Lost in Translation (LiT), designed to stress multilingual generation across widely spoken languages.
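
The protocol in the third bullet is simple enough to sketch. Below is a minimal illustration, not the paper's implementation: `translate` is a hypothetical caller-supplied wrapper around the model under test, and the semantic gap is measured with sentence-embedding cosine similarity, one plausible choice that the paper's summary does not pin down.

```python
from sentence_transformers import SentenceTransformer, util

# Encoder used only to score the semantic gap; any strong sentence
# encoder would do -- this particular choice is an assumption, not the paper's.
_encoder = SentenceTransformer("all-MiniLM-L6-v2")

def semantic_similarity(a: str, b: str) -> float:
    """Cosine similarity of sentence embeddings (roughly in [0, 1] for natural text)."""
    emb = _encoder.encode([a, b], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item()

def round_trip_score(text: str, source_lang: str, target_lang: str, translate) -> float:
    """Round-trip the text through the model under test and score what survives.

    `translate(text, src=..., tgt=...)` is a hypothetical caller-supplied
    function that prompts the evaluated model to translate. Higher return
    values mean less meaning was lost on the round trip.
    """
    forward = translate(text, src=source_lang, tgt=target_lang)
    back = translate(forward, src=target_lang, tgt=source_lang)
    return semantic_similarity(text, back)
```

Note that no human reference translation appears anywhere in the loop: the original source text itself serves as the reference, which is what lets the method scale across languages without parallel corpora.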

Abstract

Multilingual benchmarks guide the development of frontier models. Yet the multilingual evaluations reported for frontier models are structured similarly to popular reasoning and knowledge benchmarks, merely replicated across many languages. We show that such benchmarks, and consequently multilingual evaluations, measure mathematical reasoning and factual recall, not multilingual proficiency. For example, thinking variants dramatically outperform instruct variants on these benchmarks, yet often perform worse on real-world multilingual tasks, such as LMArena. We propose a simple alternative: evaluate multilingual capability via round-trip translation. Given text in a source language, translate it to a target language and back; semantic gaps between the original and the result expose failures in multilingual generation. On our benchmark, round-trip translation correlates almost perfectly (Spearman ρ = 0.94) with user ratings on LMArena, requires no human reference translations, and does not require a multilingual judge more capable than the tested models. Lastly, we introduce Lost in Translation (LiT), a challenging round-trip translation benchmark spanning widely spoken languages worldwide, for realistic evaluation of multilingual frontier models.
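
To make the validation step concrete: the reported ρ = 0.94 is a Spearman rank correlation, i.e., models are ranked by their aggregate round-trip score and that ranking is compared against their LMArena ratings. A minimal sketch, assuming per-model scores have already been computed:

```python
from scipy.stats import spearmanr

def rank_agreement(round_trip_scores: list[float], arena_ratings: list[float]) -> float:
    """Spearman rank correlation between per-model round-trip scores and
    per-model LMArena ratings (both lists in the same model order).
    A value near 1.0 means the cheap automatic metric reproduces the
    human-preference ranking."""
    rho, _p_value = spearmanr(round_trip_scores, arena_ratings)
    return rho
```

Because Spearman correlation depends only on ranks, it rewards the metric for ordering models the way users do, without requiring the two scales to be comparable.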