Translation or Recitation? Calibrating Evaluation Scores for Machine Translation of Extremely Low-Resource Languages

arXiv cs.LG / 3/27/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper argues that reported performance in extremely low-resource (XLR) machine translation (MT) is hard to compare because benchmark results may reflect evaluation artifacts rather than true methodological gains.
  • It introduces the FRED Difficulty Metrics—Fertility Ratio (F), Retrieval Proxy (R), Pre-training Exposure (E), and Corpus Diversity (D)—to contextualize evaluation scores using dataset-intrinsic properties.
  • The authors find that a substantial share of variability across results can be explained by train-test overlap and pre-training exposure, implying that “better scores” may not directly indicate stronger model capability.
  • They show that some languages, particularly extinct and non-Latin-script indigenous languages, suffer from poor tokenization coverage (high token fertility), revealing a fundamental limitation of transferring models from high-resource languages that lack a shared vocabulary (see the sketch after this list).
  • The work recommends publishing these difficulty indices alongside performance metrics to improve transparency and support more reliable evaluation of cross-lingual transfer in the XLR MT community.
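
Of the four indices, the Fertility Ratio is the most directly computable. Below is a minimal sketch of one plausible way to measure it, as the average number of subword tokens a pretrained tokenizer produces per whitespace-delimited word; the tokenizer checkpoint and the exact formulation are illustrative assumptions, not the paper's specification.

```python
# Minimal sketch of a token Fertility Ratio (F): average subword tokens per
# whitespace-delimited word. A high F suggests the pretrained vocabulary
# covers the language poorly. The checkpoint below is an illustrative
# assumption, not necessarily the one used in the paper.
from transformers import AutoTokenizer

def fertility_ratio(sentences, tokenizer):
    """Average number of subword tokens per whitespace-delimited word."""
    n_tokens, n_words = 0, 0
    for sent in sentences:
        n_words += len(sent.split())
        n_tokens += len(tokenizer.tokenize(sent))
    return n_tokens / max(n_words, 1)

# Illustrative usage with a multilingual MT tokenizer (assumed checkpoint).
tok = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
sample = ["a handful of sentences in the target low-resource language"]
print(f"F = {fertility_ratio(sample, tok):.2f}")
```

A ratio near 1 means most words map to single tokens; much larger values indicate that words are being shattered into many subword pieces, which is the tokenization-coverage problem the authors flag for extinct and non-Latin scripts.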

Abstract

The landscape of extremely low-resource (XLR) machine translation (MT) is characterized by perplexing variability in reported performance, often making results across different language pairs difficult to contextualize. For researchers focused on specific language groups, such as ancient languages, it is nearly impossible to determine whether breakthroughs reported in other contexts (e.g., native African or American languages) result from superior methodologies or are merely artifacts of benchmark collection. To address this problem, we introduce the FRED Difficulty Metrics, which include the Fertility Ratio (F), Retrieval Proxy (R), Pre-training Exposure (E), and Corpus Diversity (D), and serve as dataset-intrinsic metrics to contextualize reported scores. These metrics reveal that a significant portion of result variability is explained by train-test overlap and pre-training exposure rather than model capability. Additionally, we identify that some languages, particularly extinct and non-Latin indigenous languages, suffer from poor tokenization coverage (high token fertility), highlighting a fundamental limitation of transferring models from high-resource languages that lack a shared vocabulary. By providing these indices alongside performance scores, we enable more transparent evaluation of cross-lingual transfer and provide a more reliable foundation for the XLR MT community.
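
The Retrieval Proxy is the index tied to train-test overlap. The sketch below shows one simple way such an overlap score could be computed, as n-gram overlap between test sentences and the training corpus; the n-gram order and averaging scheme are assumptions for illustration rather than the paper's exact formulation.

```python
# Minimal sketch of a train-test overlap score in the spirit of the
# Retrieval Proxy (R). The n-gram order and averaging are illustrative
# assumptions; the paper's actual definition may differ.
def ngrams(text, n=4):
    toks = text.lower().split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def retrieval_proxy(test_sentences, train_sentences, n=4):
    """Average fraction of each test sentence's n-grams also found in the training data."""
    train_grams = set()
    for sent in train_sentences:
        train_grams |= ngrams(sent, n)
    scores = []
    for sent in test_sentences:
        grams = ngrams(sent, n)
        if grams:
            scores.append(len(grams & train_grams) / len(grams))
    return sum(scores) / max(len(scores), 1)

# A score near 1 would mean the test set is largely recoverable from the
# training data, so a high BLEU/chrF result may reflect recitation rather
# than translation ability.
```

Publishing such an overlap estimate next to the headline score, as the authors recommend for all four FRED indices, makes it easier to judge whether a reported gain reflects stronger modeling or a leakier benchmark.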