VCBench: A Streaming Counting Benchmark for Spatial-Temporal State Maintenance in Long Videos

arXiv cs.CV / 3/16/2026

Key Points

  • VCBench introduces a streaming counting benchmark to diagnose how video models maintain spatial-temporal world state during long videos.
  • It decomposes state maintenance into object counting (visible objects vs. cumulative identities) and event counting (instant actions vs. complete activity cycles) across eight fine-grained subcategories.
  • The dataset contains 406 videos with frame-by-frame annotations of 10,071 event moments and object state changes, plus 1,000 streaming QA pairs and 4,576 query points.
  • Evaluation shows mainstream video-language models have significant deficiencies in state maintenance, especially in periodic event counting, highlighting the benchmark's diagnostic value.

Abstract

Video understanding requires models to continuously track and update world state during playback. While existing benchmarks have advanced video understanding evaluation across multiple dimensions, the observation of how models maintain world state remains insufficient. We propose VCBench, a streaming counting benchmark that repositions counting as a minimal probe for diagnosing world state maintenance capability. We decompose this capability into object counting (tracking currently visible objects vs. tracking cumulative unique identities) and event counting (detecting instantaneous actions vs. tracking complete activity cycles), forming 8 fine-grained subcategories. VCBench contains 406 videos with frame-by-frame annotations of 10,071 event occurrence moments and object state change moments, generating 1,000 streaming QA pairs with 4,576 query points along timelines. By observing state maintenance trajectories through streaming multi-point queries, we design three complementary metrics to diagnose numerical precision, trajectory consistency, and temporal awareness. Evaluation on mainstream video-language models shows that current models still exhibit significant deficiencies in spatial-temporal state maintenance, particularly struggling with tasks like periodic event counting. VCBench provides a diagnostic framework for measuring and improving state maintenance in video understanding systems.
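To make the evaluation setup concrete, here is a minimal sketch of how three diagnostics of this kind could be computed from a model's answers at streaming query points. The function names and formulas are illustrative assumptions, not the paper's actual metric definitions: "numerical precision" is sketched as mean absolute error, "trajectory consistency" as the fraction of non-decreasing steps in a cumulative count, and "temporal awareness" as the fraction of query points where the prediction is not lagging behind the ground truth.

```python
# Hypothetical sketch of streaming-counting diagnostics in the spirit of
# VCBench's three metrics. All names and formulas here are assumptions
# made for illustration, not the benchmark's published definitions.

def numerical_precision(pred, gt):
    """Mean absolute error between predicted and true counts."""
    return sum(abs(p - g) for p, g in zip(pred, gt)) / len(gt)

def trajectory_consistency(pred):
    """Fraction of consecutive query points where the predicted
    cumulative count does not decrease (it never should)."""
    steps = list(zip(pred, pred[1:]))
    return sum(b >= a for a, b in steps) / len(steps)

def temporal_awareness(pred, gt):
    """Fraction of query points where the prediction already reflects
    every event up to that timestamp (i.e., the model is not lagging)."""
    return sum(p >= g for p, g in zip(pred, gt)) / len(gt)

# Example: five streaming queries along one video's timeline.
gt   = [1, 2, 2, 4, 5]   # ground-truth cumulative event count
pred = [1, 2, 1, 4, 5]   # model answers; the count dips at query 3

print(numerical_precision(pred, gt))   # 0.2
print(trajectory_consistency(pred))    # 0.75
print(temporal_awareness(pred, gt))    # 0.8
```

The dip at query 3 illustrates why multi-point querying is diagnostic: a single end-of-video answer (5, which is correct here) would hide the inconsistent intermediate state that the trajectory-level metrics expose.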