AI Navigate

Cross-Lingual LLM-Judge Transfer via Evaluation Decomposition

arXiv cs.CL / March 20, 2026


Key Points

  • The paper introduces a decomposition-based evaluation framework built around a Universal Criteria Set (UCS), enabling multilingual LLM evaluation without requiring target-language annotations.
  • UCS is a shared, language-agnostic set of evaluation dimensions; scoring each dimension separately yields an interpretable intermediate representation that supports cross-lingual transfer with minimal supervision (a code sketch of this step follows the list).
  • Experiments across multiple faithfulness tasks and model backbones show consistent improvements over strong baselines without target-language judgments.
  • By removing the need for per-language human judgments, the approach reduces annotation costs and makes multilingual evaluation scalable, which could influence evaluation standards for multilingual AI deployments.
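The key points describe the decomposition step abstractly; as a rough illustration only, the sketch below shows one way such a judge could be wired up. Everything here is an assumption: the criteria names in UCS_CRITERIA, the prompt wording, and the helper decompose_judge are hypothetical and are not taken from the paper.

```python
# Hypothetical Universal Criteria Set: language-agnostic evaluation
# dimensions. The paper's actual criteria may differ.
UCS_CRITERIA = [
    "factual_consistency",  # claims are supported by the source
    "completeness",         # no salient source content is omitted
    "no_hallucination",     # nothing is invented beyond the source
    "coherence",            # the output is internally consistent
]

# Assumed single-dimension judging prompt (not from the paper).
PROMPT_TEMPLATE = (
    "You are an evaluator. Source:\n{source}\n\n"
    "Candidate output:\n{output}\n\n"
    "Rate the candidate ONLY on this dimension: {criterion}.\n"
    "Answer with a single integer from 1 (worst) to 5 (best)."
)

def decompose_judge(source: str, output: str, llm) -> dict[str, int]:
    """Score one (source, output) pair on every UCS dimension.

    `llm` is any callable mapping a prompt string to a completion
    string. The returned per-dimension scores are the interpretable
    intermediate representation the key points refer to.
    """
    scores = {}
    for criterion in UCS_CRITERIA:
        prompt = PROMPT_TEMPLATE.format(
            source=source, output=output, criterion=criterion
        )
        reply = llm(prompt)
        # Keep the first digit the judge emits; fall back to the
        # scale midpoint if the reply cannot be parsed.
        digits = [ch for ch in reply if ch.isdigit()]
        scores[criterion] = int(digits[0]) if digits else 3
    return scores
```

Because both the criteria and the prompt are language-agnostic, the same function can judge outputs in any language the backbone model understands.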

Abstract

As large language models are increasingly deployed across diverse real-world applications, extending automated evaluation beyond English has become a critical challenge. Existing evaluation approaches are predominantly English-focused, and adapting them to other languages is hindered by the scarcity and cost of human-annotated judgments in most languages. We introduce a decomposition-based evaluation framework built around a Universal Criteria Set (UCS). UCS is a shared, language-agnostic set of evaluation dimensions; judging each dimension separately produces an interpretable intermediate representation that supports cross-lingual transfer with minimal supervision. Experiments on multiple faithfulness tasks across languages and model backbones demonstrate consistent improvements over strong baselines without requiring target-language annotations.
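One way to read the abstract's "minimal supervision" claim: because the intermediate representation has the same dimensions in every language, a lightweight aggregator fitted on whatever annotated data exists (typically English) can be reused unchanged on other languages. The sketch below illustrates that idea under this assumption; the least-squares aggregator and the names fit_aggregator and judge_any_language are illustrative stand-ins, not the paper's method.

```python
import numpy as np

def fit_aggregator(annotated_reprs, human_labels):
    """Fit a linear weighting of the UCS dimensions on the one
    language (e.g. English) where human judgments exist.

    annotated_reprs: per-dimension score vectors, shape (n, d)
    human_labels:    overall human quality scores, shape (n,)
    """
    X = np.asarray(annotated_reprs, dtype=float)
    y = np.asarray(human_labels, dtype=float)
    # Append a bias column and solve ordinary least squares; a real
    # system might prefer logistic regression or a small MLP.
    X1 = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return w

def judge_any_language(repr_vector, w):
    """Apply the English-fitted weights to a target-language example.

    No target-language annotations are needed: the intermediate
    representation exposes the same dimensions in every language.
    """
    x1 = np.append(np.asarray(repr_vector, dtype=float), 1.0)
    return float(x1 @ w)
```

The design choice doing the work here is that only the final aggregation is supervised; the per-dimension judging never sees a target-language label.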