Beyond LLM-as-a-Judge: Deterministic Metrics for Multilingual Generative Text Evaluation

arXiv cs.LG / 4/8/2026


Key Points

  • The paper argues that using LLMs as automated judges for multilingual text evaluation is expensive and hard to reproduce because of sensitivity to prompts, language, and aggregation choices.
  • It introduces OmniScore, a family of deterministic, learned evaluation metrics built to approximate LLM-judge behavior using small (<1B parameter) models for faster, more consistent scoring.
  • The approach was trained with large-scale synthetic supervision (~564k instances across 107 languages) and validated with 8,617 manually annotated examples, covering multiple evaluation paradigms.
  • OmniScore supports multi-dimensional scoring in reference-based, source-grounded, and hybrid settings, and is tested on QA, translation, and summarization tasks in six languages.
  • The authors report that lightweight deterministic metrics can serve as a scalable alternative to frontier LLM judges and release the models and datasets via Hugging Face.

Abstract

While Large Language Models (LLMs) are increasingly adopted as automated judges for evaluating generated text, their outputs are costly and highly sensitive to prompt design, language, and aggregation strategies, which severely limits reproducibility. To address these challenges, we propose OmniScore, a family of complementary, deterministic learned metrics built on small (<1B parameter) models. OmniScore approximates LLM-judge behavior while preserving the low latency and consistency of traditional model-based scoring. We trained the models with large-scale synthetic supervision (~564k instances in 107 languages) and evaluated them on 8,617 manually annotated instances. The OmniScore family supports reliable, multi-dimensional scores across a variety of settings, including reference-based, source-grounded, and hybrid evaluations. We evaluate these models on question answering (QA), translation, and summarization in 6 languages. Our results demonstrate that lightweight, deterministic learned metrics provide a practical and scalable alternative to frontier LLM judges. Our models and datasets are available at https://huggingface.co/collections/QCRI/omniscore
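To make the reproducibility contrast concrete, here is a minimal, purely illustrative sketch of what "deterministic, reference-based scoring" means in practice. This is not the OmniScore method (which uses learned sub-1B models); `deterministic_score` is a hypothetical stand-in using cosine similarity over token counts. The point it demonstrates is that, unlike a sampled LLM judge, identical inputs always yield identical scores.

```python
from collections import Counter
from math import sqrt

def deterministic_score(candidate: str, reference: str) -> float:
    """Toy reference-based metric: cosine similarity of token counts.

    Hypothetical stand-in for a learned metric such as OmniScore; its only
    purpose is to illustrate determinism: the same (candidate, reference)
    pair always maps to the same score, with no prompt or sampling variance.
    """
    c = Counter(candidate.lower().split())
    r = Counter(reference.lower().split())
    dot = sum(c[t] * r[t] for t in c)
    norm = sqrt(sum(v * v for v in c.values())) * sqrt(sum(v * v for v in r.values()))
    return dot / norm if norm else 0.0

cand = "the cat sat on the mat"
ref = "a cat was sitting on the mat"
s1 = deterministic_score(cand, ref)
s2 = deterministic_score(cand, ref)
# Repeated calls on the same inputs are bit-identical -- the property
# that makes deterministic metrics reproducible across runs.
assert s1 == s2
```

A sampled LLM judge, by contrast, can return different scores for the same pair across runs or prompt phrasings, which is the reproducibility gap the paper targets.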