Span-Level Machine Translation Meta-Evaluation

arXiv cs.CL / 3/23/2026


Key Points

  • The paper analyzes span-level precision, recall, and F-score for MT error detection and shows that seemingly similar implementations can yield substantially different rankings of error-detection systems.
  • It demonstrates that many widely-used evaluation techniques are unsuitable for evaluating MT error detection.
  • It proposes a new meta-evaluation approach called match with partial overlap and partial credit (MPP) using micro-averaging, and provides public code for its use.
  • It uses MPP to assess the current state-of-the-art in MT error detection, offering a more robust benchmark for future work.

Abstract

Machine Translation (MT) and automatic MT evaluation have improved dramatically in recent years, enabling numerous novel applications. Automatic evaluation techniques have evolved from producing scalar quality scores to precisely locating translation errors and assigning them error categories and severity levels. However, it remains unclear how to reliably measure the evaluation capabilities of auto-evaluators that do error detection, as no established technique exists in the literature. This work investigates different implementations of span-level precision, recall, and F-score, showing that seemingly similar approaches can yield substantially different rankings, and that certain widely-used techniques are unsuitable for evaluating MT error detection. We propose "match with partial overlap and partial credit" (MPP) with micro-averaging as a robust meta-evaluation strategy and release code for its use publicly. Finally, we use MPP to assess the state of the art in MT error detection.
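To make the idea concrete, here is a minimal sketch of what span-level precision, recall, and F-score with partial overlap, partial credit, and micro-averaging could look like. This is an illustration of the general concept only, not the paper's reference implementation (which is released separately); all function names, the character-span representation, and the choice of "best fractional overlap" as the partial-credit rule are assumptions.

```python
# Hypothetical sketch of span-level P/R/F with partial overlap and
# partial credit (MPP-style), micro-averaged across documents.
# NOT the paper's implementation; span format and credit rule are assumptions.

def overlap(a, b):
    """Length of the character overlap between spans a=(start, end) and b."""
    return max(0, min(a[1], b[1]) - max(a[0], b[0]))

def mpp_micro_f1(predicted, gold):
    """Micro-averaged F1 with partial credit for partially overlapping spans.

    `predicted` and `gold` are lists of documents; each document is a
    list of (start, end) character spans marking detected errors.
    """
    tp_pred = 0.0  # partial-credit matches, counted from the prediction side
    tp_gold = 0.0  # partial-credit matches, counted from the gold side
    n_pred = 0
    n_gold = 0
    for pred_spans, gold_spans in zip(predicted, gold):
        n_pred += len(pred_spans)
        n_gold += len(gold_spans)
        # Credit each predicted span by its best fractional overlap with gold.
        for p in pred_spans:
            best = max((overlap(p, g) / (p[1] - p[0]) for g in gold_spans),
                       default=0.0)
            tp_pred += best
        # Credit each gold span by its best fractional overlap with predictions.
        for g in gold_spans:
            best = max((overlap(p, g) / (g[1] - g[0]) for p in pred_spans),
                       default=0.0)
            tp_gold += best
    # Micro-averaging: pool counts over all documents before dividing,
    # so longer documents with more spans contribute proportionally more.
    precision = tp_pred / n_pred if n_pred else 0.0
    recall = tp_gold / n_gold if n_gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Example: one predicted span overlaps 4/5 of a gold span, the other misses.
p, r, f = mpp_micro_f1([[(0, 5), (10, 14)]], [[(0, 4), (20, 25)]])
```

The contrast the paper draws is with stricter alternatives such as exact-match (a span counts only if boundaries match exactly) or binary-overlap matching (any overlap counts as a full match), and with macro-averaging (averaging per-document scores), each of which can rank the same set of error detectors differently.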