Backtranslation Augmented Direct Preference Optimization for Neural Machine Translation

arXiv cs.CL / April 29, 2026


Key Points

  • The paper introduces an RL-based post-training approach for neural machine translation that aims to fix persistent translation errors seen in supervised parallel-data systems.
  • The proposed framework requires only a general text corpus plus iterative feedback from an expert translator (human or an AI system) to guide model improvement.
  • It uses Direct Preference Optimization (DPO) as the reinforcement-learning mechanism for preference-based post-training; the standard objective is sketched after this list.
  • In English-to-German experiments, applying the method to the gemma3-1b model improves translation quality, raising the COMET score from 0.703 to 0.747.
  • The authors argue the DPO approach provides an efficient and stable way to enhance pre-trained NMT models using preference signals rather than additional parallel supervised data.
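
For context, preference-based post-training of this kind typically minimizes the standard DPO objective of Rafailov et al. (2023). The summary does not print the paper's loss, so the formula below is the textbook form rather than a quotation from the paper:

$$
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\left[\log\sigma\left(\beta\log\frac{\pi_\theta(y_w\mid x)}{\pi_{\mathrm{ref}}(y_w\mid x)} - \beta\log\frac{\pi_\theta(y_l\mid x)}{\pi_{\mathrm{ref}}(y_l\mid x)}\right)\right]
$$

Here $x$ is the source sentence, $y_w$ and $y_l$ are the expert-preferred and dispreferred translations, $\pi_{\mathrm{ref}}$ is the frozen pre-trained model, $\sigma$ is the logistic function, and $\beta$ controls the strength of the implicit KL constraint that keeps the fine-tuned policy $\pi_\theta$ close to the reference.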

Abstract

Contemporary neural machine translation (NMT) systems are almost exclusively built by training on supervised parallel data. Despite the tremendous progress achieved, these systems still exhibit persistent translation errors. This paper proposes that a post-training paradigm based on reinforcement learning (RL) can effectively rectify such mistakes. We introduce a novel framework that requires only a general text corpus and an expert translator, which can be either a human or an AI system, to provide iterative feedback. In our experiments, we focus specifically on English-to-German translation as a representative high-resource language pair. Crucially, we implement this RL-based post-training using Direct Preference Optimization (DPO). Applying our DPO-driven framework to the gemma3-1b model yields a significant improvement in translation quality, elevating its COMET score from 0.703 to 0.747 on the English-to-German task. The results demonstrate that DPO offers an efficient and stable pathway for enhancing pre-trained NMT models through preference-based post-training.
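
The sketch below illustrates how the recipe the abstract describes could be wired up: build preference pairs from a monolingual corpus using expert feedback, then optimize the DPO loss against a frozen reference model. The paper does not publish its implementation; the helper names (`generate`, `expert_score`, `build_pairs`, `dpo_loss`), the best-vs-worst pairing heuristic, and the use of a reference-free COMET-style scorer as a stand-in for the AI expert are all illustrative assumptions.

```python
from typing import Callable

import torch
import torch.nn.functional as F


def build_pairs(sources: list[str],
                generate: Callable[[str, int], list[str]],
                expert_score: Callable[[str, str], float],
                n_candidates: int = 8) -> list[dict]:
    """Turn a monolingual English corpus into (prompt, chosen, rejected) triples."""
    pairs = []
    for src in sources:
        # Sample several candidate German translations from the current policy.
        candidates = generate(src, n_candidates)
        ranked = sorted(candidates, key=lambda hyp: expert_score(src, hyp))
        best, worst = ranked[-1], ranked[0]
        if expert_score(src, best) > expert_score(src, worst):  # skip ties
            pairs.append({"prompt": f"Translate to German: {src}",
                          "chosen": best,      # expert-preferred translation
                          "rejected": worst})  # expert-dispreferred translation
    return pairs


def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """DPO loss over summed per-token log-probs of each full translation.

    All inputs have shape (batch,); the reference model stays frozen, and
    `beta` scales the implicit KL penalty that keeps the policy close to it.
    """
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    # Logistic loss: push the policy to rank the chosen translation higher
    # than the rejected one, relative to the reference model.
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()
```

In an iterative setup like the one the abstract emphasizes, these two steps would alternate: regenerate candidates with the updated policy, re-collect expert preferences, and run another round of DPO.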