Cross-Preference Learning for Sentence-Level and Context-Aware Machine Translation
arXiv cs.CL / March 27, 2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper argues that context-aware machine translation (document-level signals) often fails to reliably beat sentence-level MT because the usefulness of context varies unevenly across sentences.
- It introduces Cross-Preference Learning (CPL), a preference-based training framework that explicitly models complementary strengths between sentence-level and context-aware MT.
- CPL incorporates both intra-condition and cross-condition preferences into a single optimization objective, providing supervision on when and how contextual information improves translation quality.
- Experiments on several public context-aware MT benchmarks using multiple models (Qwen3-4B, Qwen3-8B, Llama-3-8B) show consistent gains in translation quality and robustness.
- The improvements reportedly come without any architectural changes, suggesting CPL is a drop-in training-objective upgrade that can generalize across model types.
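To make the objective concrete, here is a minimal sketch of how intra-condition and cross-condition preference pairs could be combined in one loss, assuming a DPO-style preference formulation over sequence log-probabilities. The function names, the `lam` weighting, and the pair construction are illustrative assumptions, not the paper's actual formulation.

```python
import math

def dpo_term(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO-style preference loss for one (chosen, rejected) pair, given
    sequence log-probabilities under the policy (pi_*) and a frozen
    reference model (ref_*)."""
    margin = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log sigmoid(margin)

def cpl_loss(intra_pairs, cross_pairs, lam=1.0, beta=0.1):
    """Single objective over both pair types (hypothetical combination):
    - intra_pairs: preferences between outputs produced under the SAME
      condition (e.g. two context-aware candidates).
    - cross_pairs: preferences between a sentence-level output and a
      context-aware output for the same source, teaching the model WHEN
      context helps.
    Each pair is a tuple (pi_chosen, pi_rejected, ref_chosen, ref_rejected).
    """
    intra = sum(dpo_term(*p, beta=beta) for p in intra_pairs) / max(len(intra_pairs), 1)
    cross = sum(dpo_term(*p, beta=beta) for p in cross_pairs) / max(len(cross_pairs), 1)
    return intra + lam * cross
```

With equal policy and reference log-probabilities the margin is zero and each term reduces to log 2, the DPO loss at indifference; pairs where the policy already prefers the chosen output yield a lower loss.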