
Is Human Annotation Necessary? Iterative MBR Distillation for Error Span Detection in Machine Translation

arXiv cs.CL / 3/16/2026

📰 News · Models & Research

Key Points

  • The paper proposes Iterative MBR Distillation for Error Span Detection (ESD) in machine translation, a self-evolution framework that uses Minimum Bayes Risk decoding to locate translation errors without human annotations.
  • It employs an off-the-shelf large language model to generate pseudo-labels, removing the need for costly human-annotated data.
  • Experiments on the WMT Metrics Shared Task datasets show that models trained only on these self-generated labels outperform unadapted baselines and supervised models trained on human data at the system and span levels, while remaining competitive at the sentence level.
  • The approach offers a scalable alternative for MT evaluation by reducing annotation requirements and improving span-level error detection.
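The MBR selection step described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes each LLM sample is a set of `(start, end, severity)` error spans and uses span-level F1 as the utility function; the pseudo-label is the candidate with the highest expected utility against the other samples.

```python
# Minimal MBR-decoding sketch for error span pseudo-labeling.
# Assumptions (not from the paper): candidates are sets of
# (start, end, severity) tuples; utility is exact-match span F1.

def span_f1(pred, ref):
    """F1 between two sets of error spans (exact tuple match)."""
    if not pred and not ref:
        return 1.0
    if not pred or not ref:
        return 0.0
    tp = len(pred & ref)  # spans agreed on by both annotations
    precision = tp / len(pred)
    recall = tp / len(ref)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def mbr_select(candidates):
    """Return the candidate annotation with the highest mean
    utility (span F1) against all other sampled candidates --
    the MBR consensus, used as the training pseudo-label."""
    best, best_score = None, -1.0
    for i, cand in enumerate(candidates):
        score = sum(span_f1(cand, other)
                    for j, other in enumerate(candidates) if j != i)
        score /= max(len(candidates) - 1, 1)
        if score > best_score:
            best, best_score = cand, score
    return best
```

In the iterative framework, candidates would be sampled from the current model (initially an off-the-shelf LLM), the MBR consensus kept as the pseudo-label, the model fine-tuned on those labels, and the cycle repeated.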

Abstract

Error Span Detection (ESD) is a crucial subtask in Machine Translation (MT) evaluation, aiming to identify the location and severity of translation errors. While fine-tuning models on human-annotated data improves ESD performance, acquiring such data is expensive and prone to inconsistencies among annotators. To address this, we propose a novel self-evolution framework based on Minimum Bayes Risk (MBR) decoding, named Iterative MBR Distillation for ESD, which eliminates the reliance on human annotations by leveraging an off-the-shelf LLM to generate pseudo-labels. Extensive experiments on the WMT Metrics Shared Task datasets demonstrate that models trained solely on these self-generated pseudo-labels outperform both the unadapted base model and supervised baselines trained on human annotations at the system and span levels, while maintaining competitive sentence-level performance.