A Multimodal Dataset for Visually Grounded Ambiguity in Machine Translation

arXiv cs.CL / 5/5/2026

Key Points

  • The paper argues that existing multimodal machine translation (MMT) ambiguity benchmarks have significant data-quality problems and do not match real translation scenarios well.
  • It introduces VIDA (Visually-Dependent Ambiguity), a curated dataset of 2,500 instances where correctly resolving an annotated ambiguous source span requires visual evidence.
  • The authors propose Disambiguation-Centric Metrics that use an LLM-as-a-judge span-level classifier to check whether each annotated ambiguous expression is resolved correctly (see the sketch after this list).
  • Experiments on two state-of-the-art large vision-language models compare vanilla inference, supervised fine-tuning (SFT), and chain-of-thought SFT (CoT-SFT), finding that CoT-SFT improves disambiguation accuracy more consistently than plain SFT, particularly on out-of-distribution subsets.

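To make the metric concrete, here is a minimal sketch of what an LLM-as-a-judge span-level check could look like. This is an illustration under assumptions, not the paper's implementation: the judge model, prompt wording, and the `judge_span` helper are all hypothetical, and any chat-completion client with an OpenAI-style interface would serve.

```python
import json

# Hypothetical setup: any chat-completion client with an OpenAI-style
# interface works here; the model name and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()

JUDGE_PROMPT = """\
You are grading a translation for ambiguity resolution.
Source sentence: {source}
Annotated ambiguous span: "{span}"
Intended meaning (grounded in the image): {intended_sense}
Candidate translation: {translation}

Does the translation render the ambiguous span with the intended meaning?
Reply with exactly one JSON object: {{"resolved": true}} or {{"resolved": false}}."""


def judge_span(source: str, span: str, intended_sense: str, translation: str) -> bool:
    """Ask the judge whether one annotated ambiguous span was resolved correctly."""
    response = client.chat.completions.create(
        model="gpt-4o",   # placeholder: the summary does not name the judge model
        temperature=0.0,  # deterministic grading
        messages=[{
            "role": "user",
            "content": JUDGE_PROMPT.format(
                source=source,
                span=span,
                intended_sense=intended_sense,
                translation=translation,
            ),
        }],
    )
    verdict = json.loads(response.choices[0].message.content)
    return bool(verdict["resolved"])
```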
Abstract

Ambiguity resolution is a key challenge in multimodal machine translation (MMT), where models must genuinely leverage visual input to map an ambiguous expression to its intended meaning. Although prior work has proposed disambiguation-oriented benchmarks that provide supporting evidence for the role of vision, we observe substantial issues in data quality and a mismatch with realistic translation scenarios. Moreover, existing ambiguity-oriented evaluations are not well suited to broader ambiguity types in open-ended translation. To address these limitations, we present VIDA (Visually-Dependent Ambiguity), a dataset of 2,500 carefully curated instances in which resolving an annotated ambiguous source span requires visual evidence. We further propose Disambiguation-Centric Metrics that use an LLM-as-a-judge classifier to verify whether annotated ambiguous expressions are resolved correctly at the span level. Experiments with two state-of-the-art large vision-language models under vanilla inference, supervised fine-tuning (SFT), and our chain-of-thought SFT (CoT-SFT) show that while SFT improves overall translation quality, CoT-SFT yields more consistent gains in disambiguation accuracy, especially on out-of-distribution subsets, indicating stronger generalization in resolving diverse ambiguity types.
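
For readers who want to prototype against this description, the sketch below shows one plausible shape for a VIDA-style instance and how disambiguation accuracy might be aggregated over span-level verdicts. The dataclass fields and the `disambiguation_accuracy` helper are hypothetical, and the released dataset's actual schema may differ.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class VIDAInstance:
    """One hypothetical VIDA-style record; field names are illustrative."""
    image_path: str      # visual evidence needed to disambiguate the span
    source: str          # e.g. "She watched the bat fly out of the cave."
    ambiguous_span: str  # e.g. "bat"
    intended_sense: str  # e.g. "the animal, not the sports equipment"
    reference: str       # reference translation consistent with the image


def disambiguation_accuracy(
    instances: List[VIDAInstance],
    translations: List[str],
    judge: Callable[[str, str, str, str], bool],
) -> float:
    """Fraction of instances whose annotated span the judge marks as resolved.

    `judge` is any span-level verifier, e.g. the judge_span sketch above.
    """
    resolved = sum(
        judge(ex.source, ex.ambiguous_span, ex.intended_sense, hyp)
        for ex, hyp in zip(instances, translations)
    )
    return resolved / len(instances)
```

Passing the judge in as a callable keeps the accuracy computation independent of any particular judge model, so the same aggregation works whether the verifier is an LLM call or a cheaper heuristic.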