Litmus (Re)Agent: A Benchmark and Agentic System for Predictive Evaluation of Multilingual Models

arXiv cs.CL / 4/13/2026


Key Points

  • The paper addresses predictive multilingual evaluation, aiming to estimate target-language performance when benchmark results are missing for specific languages or tasks.
  • It introduces a controlled benchmark with 1,500 questions across six tasks and five evidence scenarios, separating accessible evidence from ground-truth labels to test inference over incomplete literature.
  • It proposes Litmus (Re)Agent, a DAG-orchestrated agentic system that breaks queries into hypotheses, retrieves evidence, and synthesizes predictions using feature-aware aggregation (see the sketch after this list).
  • Experiments across six systems show Litmus (Re)Agent achieves the best overall performance, with the biggest improvements in transfer-heavy settings where direct evidence is weak or absent.
  • The authors conclude that structured agentic reasoning can effectively predict multilingual model performance under sparse or uneven evaluation evidence.
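
The pipeline described above can be read as a small DAG of three stages: decompose the query into hypotheses, retrieve supporting evidence for each, and aggregate with feature-aware weights. The sketch below is a minimal, hypothetical rendering of that idea, assuming a simplified view of the system: every class name, feature key, and the overlap-weighting heuristic are illustrative assumptions, not the paper's actual prompts, retriever, or aggregation scheme.

```python
# Hypothetical three-node DAG: decompose -> retrieve -> aggregate.
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    claim: str                        # a testable sub-question about performance
    features: dict                    # e.g. {"language": "sw", "task": "qa"}
    evidence: list = field(default_factory=list)

def decompose(query_features: dict) -> list[Hypothesis]:
    """Node 1: split the query into transfer hypotheses by relaxing one feature at a time."""
    hyps = [Hypothesis("direct match", dict(query_features))]
    for key in query_features:
        relaxed = {k: v for k, v in query_features.items() if k != key}
        hyps.append(Hypothesis(f"transfer across {key}", relaxed))
    return hyps

def retrieve(h: Hypothesis, corpus: list[dict]) -> Hypothesis:
    """Node 2: attach published results whose features satisfy the hypothesis."""
    h.evidence = [doc for doc in corpus
                  if all(doc["features"].get(k) == v for k, v in h.features.items())]
    return h

def aggregate(hyps: list[Hypothesis], query_features: dict) -> float:
    """Node 3: feature-aware aggregation -- weight each result by feature overlap with the query."""
    num = den = 0.0
    for h in hyps:
        for doc in h.evidence:
            overlap = sum(doc["features"].get(k) == v for k, v in query_features.items())
            num += overlap * doc["score"]
            den += overlap
    return num / den if den else float("nan")

if __name__ == "__main__":
    # Toy query with no directly published result, plus a tiny "literature" of related scores.
    query = {"model": "m-large", "task": "qa", "language": "sw"}
    corpus = [
        {"features": {"model": "m-large", "task": "qa", "language": "en"}, "score": 0.82},
        {"features": {"model": "m-large", "task": "nli", "language": "sw"}, "score": 0.64},
    ]
    hyps = [retrieve(h, corpus) for h in decompose(query)]
    print(f"Predicted score: {aggregate(hyps, query):.2f}")
```

Run as-is, the toy example falls back on cross-task and cross-lingual evidence because no direct result exists, which mirrors the transfer-heavy setting where the paper reports the largest gains.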

Abstract

We study predictive multilingual evaluation: estimating how well a model will perform on a task in a target language when direct benchmark results are missing. This problem is common in multilingual deployment, where evaluation coverage is sparse and published evidence is uneven across languages, tasks, and model families. We introduce a controlled benchmark of 1,500 questions spanning six tasks and five evidence scenarios. The benchmark separates accessible evidence from ground truth, enabling evaluation of systems that must infer missing results from incomplete literature evidence. We also present Litmus (Re)Agent, a DAG-orchestrated agentic system that decomposes queries into hypotheses, retrieves evidence, and synthesises predictions through feature-aware aggregation. Across six systems, Litmus (Re)Agent achieves the best overall performance, with the largest gains in transfer-heavy scenarios where direct evidence is weak or absent. These results show that structured agentic reasoning is a promising approach to multilingual performance estimation under incomplete evidence.
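
To make the separation of accessible evidence from ground truth concrete, here is a minimal sketch of what one benchmark item and its scoring loop might look like. The field names, the scenario label, and the mean-absolute-error metric are assumptions for illustration only; the paper's actual schema and evaluation metric are not reproduced here.

```python
# Hypothetical benchmark item: the system sees "evidence", never "ground_truth".
item = {
    "question": "What accuracy will model M reach on task T in language L?",
    "scenario": "cross-lingual transfer",        # one of the five evidence scenarios (label assumed)
    "evidence": [                                # the incomplete literature the system may use
        {"model": "M", "task": "T",  "language": "en", "score": 0.81},
        {"model": "M", "task": "T2", "language": "L",  "score": 0.67},
    ],
    "ground_truth": 0.70,                        # held out, used only for scoring
}

def evaluate(predict, items):
    """Score a predictor against held-out ground truth it never gets to see."""
    errors = [abs(predict(it["question"], it["evidence"]) - it["ground_truth"]) for it in items]
    return sum(errors) / len(errors)             # mean absolute error (metric assumed)

# Example: a naive baseline that simply averages whatever evidence it is shown.
naive = lambda question, evidence: sum(e["score"] for e in evidence) / len(evidence)
print(f"MAE of naive baseline: {evaluate(naive, [item]):.2f}")
```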