XQ-MEval: A Dataset with Cross-lingual Parallel Quality for Benchmarking Translation Metrics
arXiv cs.CL / 4/17/2026
Key Points
- The paper argues that simply averaging translation-evaluation metric scores across languages can be misleading due to cross-lingual scoring bias, where equally good translations may receive different scores depending on the language.
- It introduces XQ-MEval, a semi-automatically built dataset covering nine translation directions, created by injecting MQM-defined errors into gold translations, filtering the results with native-speaker review, and thereby generating pseudo translations of controllable quality.
- XQ-MEval structures the data into source–reference–pseudo-translation triplets to benchmark how well different translation metrics assess quality (a data-structure sketch follows this list).
- Experiments with nine representative metrics reveal inconsistencies between averaged metric scores and human judgments, providing empirical evidence of cross-lingual scoring bias (see the correlation sketch below).
- The authors further propose a normalization method based on XQ-MEval that aligns score distributions across languages, aiming to improve the fairness and reliability of multilingual metric evaluation (see the normalization sketch below).
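The exact schema of the dataset is not spelled out in the summary above, but a minimal sketch of one source–reference–pseudo-translation triplet might look like the following Python. The field names and the discrete quality bands are assumptions for illustration, not the dataset's actual labels.

```python
from dataclasses import dataclass
from enum import Enum

class QualityLevel(Enum):
    """Hypothetical quality bands for pseudo translations."""
    GOLD = 0      # untouched reference-quality translation
    MINOR = 1     # minor MQM-defined errors injected
    MAJOR = 2     # major MQM-defined errors injected
    CRITICAL = 3  # critical MQM-defined errors injected

@dataclass(frozen=True)
class XQMEvalTriplet:
    """One benchmark item: source, gold reference, and a pseudo
    translation degraded to a known quality level."""
    direction: str          # e.g. "en-de"
    source: str
    reference: str
    pseudo_translation: str
    quality: QualityLevel

# Illustrative item (invented text, not from the dataset):
item = XQMEvalTriplet(
    direction="en-de",
    source="The cat sat on the mat.",
    reference="Die Katze saß auf der Matte.",
    pseudo_translation="Die Katze saß auf der Matratze.",  # "mattress": a minor mistranslation
    quality=QualityLevel.MINOR,
)
```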
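To make the benchmarking step concrete, one plausible protocol is to compute a rank correlation (Kendall's τ) per translation direction between a metric's scores and the known injected quality levels. The `score_fn` signature and the per-direction grouping below are illustrative assumptions; the paper's exact evaluation procedure may differ.

```python
from collections import defaultdict
from scipy.stats import kendalltau

def benchmark_metric(items, score_fn):
    """Correlate a metric's scores with the injected quality levels,
    separately for each translation direction.

    items:    iterable of XQMEvalTriplet objects (see the sketch above)
    score_fn: callable(source, reference, hypothesis) -> float; a
              stand-in for any reference-based translation metric
    """
    by_direction = defaultdict(lambda: ([], []))
    for item in items:
        scores, quality = by_direction[item.direction]
        scores.append(score_fn(item.source, item.reference, item.pseudo_translation))
        # Negate the degradation level so a well-behaved metric
        # correlates positively with quality.
        quality.append(-item.quality.value)

    taus = {}
    for direction, (scores, quality) in by_direction.items():
        tau, _pvalue = kendalltau(scores, quality)
        taus[direction] = tau
    return taus
```

Comparing these per-direction correlations directly, rather than a single cross-language average, is what exposes the scoring bias the paper reports.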
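The summary does not say how the proposed normalization works; per-language z-scoring is one common way to align score distributions and serves here only as a sketch under that assumption. `fit_direction_norms` and `normalize` are hypothetical helpers, not the authors' method.

```python
import statistics

def fit_direction_norms(raw_scores):
    """Estimate per-direction score statistics on XQ-MEval.

    raw_scores: dict mapping a direction (e.g. "en-de") to the list
                of raw scores a metric produced on its items.
    """
    return {
        direction: (statistics.mean(s), statistics.stdev(s))
        for direction, s in raw_scores.items()
    }

def normalize(score, direction, norms):
    """Map a raw score to a direction-independent z-score, so that
    averaging normalized scores across languages compares like with like."""
    mean, std = norms[direction]
    return (score - mean) / std if std > 0 else 0.0
```

With norms fitted on XQ-MEval, cross-lingual averages would be taken over normalized rather than raw scores, which is the fairness property the authors target.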