AI Navigate

Large Language Models as Annotators for Machine Translation Quality Estimation

arXiv cs.CL / 3/12/2026

💬 Opinion · Models & Research

Key Points

  • LLMs are proposed as generators of MQM-style annotations to train MT quality estimation models, addressing the high inference costs of using LLMs directly.
  • The paper introduces a simplified MQM scheme limited to top-level categories and a GPT-4o-based prompt framework named PPbMQM.
  • Results show the LLM-generated annotations correlate well with human annotations and that training COMET on them yields competitive segment-level QE performance for Chinese-English and English-German.
  • This approach enables more cost-effective MTQE pipelines by leveraging LLMs for annotation rather than inference during deployment.
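The annotation-to-score step implied by this pipeline can be sketched in a few lines. This is an illustrative reconstruction, not the paper's code: the severity weights (minor = 1, major = 5, critical = 25) follow the common MQM convention used at WMT, and the error-category names are placeholders; the paper's simplified top-level scheme may assign different weights.

```python
# Sketch: turn MQM-style error annotations into a segment-level score.
# Weights follow the common MQM convention (minor=1, major=5, critical=25);
# the paper's simplified scheme may differ -- illustrative only.

SEVERITY_WEIGHTS = {"minor": 1, "major": 5, "critical": 25}

def mqm_score(errors):
    """Negative sum of severity weights: 0 = perfect, more negative = worse."""
    return -sum(SEVERITY_WEIGHTS[e["severity"]] for e in errors)

# Example: one major accuracy error plus one minor fluency error.
errors = [
    {"category": "accuracy", "severity": "major"},
    {"category": "fluency", "severity": "minor"},
]
print(mqm_score(errors))  # -6
```

A score computed this way gives each annotated segment a scalar label, which is what a regression-style QE model needs as its training target.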

Abstract

Large Language Models (LLMs) have demonstrated excellent performance on Machine Translation Quality Estimation (MTQE), yet their high inference costs make them impractical for direct application. In this work, we propose applying LLMs to generate MQM-style annotations for training a COMET model: following Fernandes et al. (2023), we argue that segment-level annotations provide a strong rationale for LLMs and are key to good segment-level QE. We propose a simplified MQM scheme, mostly restricted to top-level categories, to guide the LLM's error selection. We present a systematic approach to the development of a GPT-4o-based prompt, called PPbMQM (Prompt-Pattern-based-MQM). We show that the resulting annotations correlate well with human annotations and that training COMET on them leads to competitive performance on segment-level QE for Chinese-English and English-German.
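The final step described in the abstract, training COMET on the LLM-generated annotations, amounts to packaging each annotated segment as a (source, hypothesis, score) row. The sketch below assumes COMET's documented CSV layout for referenceless (QE-style) regression training with `src`, `mt`, and `score` columns; the paper's exact data format is not given, so treat the field names as an assumption.

```python
# Hypothetical data-prep step: convert LLM-annotated segments into the
# (src, mt, score) rows a referenceless COMET regression model trains on.
# Column names assume COMET's CSV training format; the paper's actual
# pipeline is not described in this summary.
import csv
import io

def to_comet_rows(annotated):
    """annotated: list of dicts carrying 'src', 'mt', and an MQM-style 'score'."""
    return [{"src": a["src"], "mt": a["mt"], "score": a["score"]} for a in annotated]

def write_csv(rows):
    """Serialize rows to a CSV string with a src,mt,score header."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["src", "mt", "score"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

rows = to_comet_rows([{"src": "你好", "mt": "Hello", "score": 0.0}])
print(write_csv(rows).splitlines()[0])  # src,mt,score
```

Because the scores come from the LLM annotator rather than human raters, this file can be regenerated cheaply for new language pairs, which is the cost advantage the paper targets.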