RefereeBench: Are Video MLLMs Ready to be Multi-Sport Referees

arXiv cs.CL / 4/20/2026


Key Points

  • The paper introduces RefereeBench, a large-scale, human-annotated benchmark to evaluate whether multimodal LLMs can act as automatic sports referees across 11 sports using 925 curated videos and 6,475 QA pairs.
  • It assesses five key officiating abilities—foul existence, classification, reasoning, entity perception, and temporal grounding—to test rule-grounded, multimodal decision-making rather than generic video understanding.
  • Evaluations of leading models (including Doubao-Seed-1.8 and Gemini-3-Pro) show only about 60% accuracy, and even the best open-source result (Qwen3-VL) reaches about 47%, indicating limited reliability.
  • Analysis finds models are better at detecting incidents and entities, but they commonly fail on rule application and temporal grounding and often over-call fouls on normal clips.
  • The benchmark is positioned as evidence that future MLLMs must better integrate domain knowledge with multimodal understanding to enable trustworthy AI-assisted officiating and broader multimodal decision-making.

Abstract

While Multimodal Large Language Models (MLLMs) excel at generic video understanding, their ability to support specialized, rule-grounded decision-making remains insufficiently explored. In this paper, we introduce RefereeBench, the first large-scale benchmark for evaluating MLLMs as automatic sports referees. Spanning 11 sports with 925 curated videos and 6,475 QA pairs, RefereeBench evaluates five core officiating abilities: foul existence, foul and penalty classification, foul and penalty reasoning, entity perception, and temporal grounding. The benchmark is fully human-annotated to ensure high-quality annotations grounded in authentic officiating logic and multimodal evidence. Extensive evaluations of state-of-the-art MLLMs show that even the strongest models, such as Doubao-Seed-1.8 and Gemini-3-Pro, achieve only around 60% accuracy, while the strongest open-source model, Qwen3-VL, reaches only 47%. These results indicate that current models remain far from being reliable sports referees. Further analysis shows that while models can often identify incidents and involved entities, they struggle with rule application and temporal grounding, and frequently over-call fouls on normal clips. Our benchmark highlights the need for future MLLMs that better integrate domain knowledge and multimodal understanding, advancing trustworthy AI-assisted officiating and broader multimodal decision-making.