Is Human Annotation Necessary? Iterative MBR Distillation for Error Span Detection in Machine Translation
arXiv cs.CL / 3/16/2026
Key Points
- The paper proposes Iterative MBR Distillation for Error Span Detection (ESD) in machine translation, a self-evolution framework that uses Minimum Bayes Risk decoding to locate translation errors without human annotations.
- It employs an off-the-shelf large language model to generate pseudo-labels, removing the need for costly human-annotated data.
- Experiments on the WMT Metrics Shared Task datasets show that models trained only on these self-generated labels outperform both unadapted baselines and supervised models trained on human-annotated data at the system and span levels, while remaining competitive at the sentence level.
- The approach offers a scalable alternative for MT evaluation by reducing annotation requirements and improving span-level error detection.
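The core MBR step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes candidate annotations are sets of `(start, end)` error spans and uses a hypothetical exact-match span F1 as the utility function (the paper's actual utility and span representation may differ). MBR then selects the candidate with the highest average utility against all other candidates, i.e. the "consensus" annotation, which can serve as a pseudo-label.

```python
def span_f1(spans_a, spans_b):
    """Hypothetical utility: F1 over exact (start, end) error spans."""
    a, b = set(spans_a), set(spans_b)
    if not a and not b:
        return 1.0  # both annotations say "no errors" -> perfect agreement
    tp = len(a & b)
    precision = tp / len(a) if a else 0.0
    recall = tp / len(b) if b else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def mbr_select(candidates, utility=span_f1):
    """Return the candidate with the highest expected utility
    against the other candidates (Minimum Bayes Risk selection)."""
    best, best_score = None, float("-inf")
    for cand in candidates:
        others = [o for o in candidates if o is not cand]
        score = sum(utility(cand, o) for o in others) / max(len(others), 1)
        if score > best_score:
            best, best_score = cand, score
    return best

# Example: three sampled annotations for one translation;
# two agree that only span (0, 3) is an error.
samples = [[(0, 3)], [(0, 3), (5, 9)], [(0, 3)]]
pseudo_label = mbr_select(samples)  # -> [(0, 3)]
```

In the iterative setting, the pseudo-labels selected this way would be used to fine-tune the detector, which then produces the next round of candidates.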