VideoZeroBench: Probing the Limits of Video MLLMs with Spatio-Temporal Evidence Verification

arXiv cs.CV / 4/3/2026


Key Points

  • The paper introduces VideoZeroBench, a new hierarchical benchmark for long-video question answering that verifies spatio-temporal evidence rather than relying only on answer accuracy.
  • It contains 500 manually annotated questions across 13 domains, each paired with temporal intervals and spatial bounding boxes that serve as required evidence for correct predictions.
  • A five-level evaluation protocol disentangles answer generation, temporal grounding, and spatial grounding by progressively tightening constraints on what the model must correctly localize.
  • Results indicate a large gap between surface-level correctness and evidence-based reasoning: Gemini-3-Pro correctly answers fewer than 17% of questions at Level-3, and essentially no model exceeds 1% when both correct answering and precise spatio-temporal localization are required at Level-5.
  • The authors provide additional analyses (e.g., performance vs. minimal evidence spans and atomic abilities) and plan to release the benchmark and code publicly to support future grounded video reasoning research.
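
The evidence-verification idea behind the protocol can be illustrated with a minimal scoring sketch. This is a hypothetical reconstruction, not the paper's released code: the IoU thresholds, tuple formats, and function names here are assumptions chosen for illustration.

```python
# Hypothetical sketch of grounded scoring in the spirit of VideoZeroBench's
# Level-5 check: an answer counts only if the predicted temporal interval and
# spatial bounding box both match the annotated evidence. The 0.5 IoU
# thresholds are assumptions, not values from the paper.

def temporal_iou(pred, gt):
    """IoU between two (start, end) time intervals in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

def spatial_iou(pred, gt):
    """IoU between two (x1, y1, x2, y2) bounding boxes."""
    ix = max(0.0, min(pred[2], gt[2]) - max(pred[0], gt[0]))
    iy = max(0.0, min(pred[3], gt[3]) - max(pred[1], gt[1]))
    inter = ix * iy
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(pred) + area(gt) - inter
    return inter / union if union > 0 else 0.0

def grounded_correct(answer_ok, t_pred, t_gt, s_pred, s_gt,
                     t_thresh=0.5, s_thresh=0.5):
    """Strictest-level check: answer AND both groundings must pass."""
    return (answer_ok
            and temporal_iou(t_pred, t_gt) >= t_thresh
            and spatial_iou(s_pred, s_gt) >= s_thresh)
```

Under a scheme like this, the reported collapse from Level-3 to Level-5 follows naturally: a model can guess the right answer yet fail the conjunction of temporal and spatial IoU checks.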

Abstract

Recent video multimodal large language models achieve impressive results across various benchmarks. However, current evaluations suffer from two critical limitations: (1) inflated scores can mask deficiencies in fine-grained visual understanding and reasoning, and (2) answer correctness is often measured without verifying whether models identify the precise spatio-temporal evidence supporting their predictions. To address this, we present VideoZeroBench, a hierarchical benchmark designed for challenging long-video question answering that rigorously verifies spatio-temporal evidence. It comprises 500 manually annotated questions across 13 domains, paired with temporal intervals and spatial bounding boxes as evidence. To disentangle answer generation, temporal grounding, and spatial grounding, we introduce a five-level evaluation protocol that progressively tightens evidence requirements. Experiments show that even Gemini-3-Pro correctly answers fewer than 17% of questions under the standard end-to-end QA setting (Level-3). When grounding constraints are imposed, performance drops sharply: no model exceeds 1% accuracy when both correct answering and accurate spatio-temporal localization are required (Level-5), with most failing to achieve any correct grounded predictions. These results expose a significant gap between surface-level answer correctness and genuine evidence-based reasoning, revealing that grounded video understanding remains a bottleneck for long-video QA. We further analyze performance across minimal evidence spans, atomic abilities, and inference paradigms, providing insights for future research in grounded video reasoning. The benchmark and code will be made publicly available.