Grounding Video Reasoning in Physical Signals

arXiv cs.CV / 4/24/2026


Key Points

  • The paper argues that physical video understanding is not just about correctly answering “what” happened, but also about accurately grounding events in time (“when”) and space (“where”).
  • It introduces a new grounded benchmark that extends V-STaR’s what–when–where evaluation to four video datasets, six physics domains, three prompt families, and four input perturbation conditions.
  • The benchmark is built by converting each video clip into a shared grounded event record, then generating the query families (physics, vstar_like, and neutral_rstr) from that record with shared temporal/spatial targets (see the sketch after this list).
  • Experiments across model and prompt families show that physics prompts perform best overall, vstar_like provides the clearest non-physics semantic comparison, and neutral_rstr acts as a tougher templated control.
  • The authors find that robustness to prompt and perturbation changes is selective rather than universal, that perturbation gains concentrate in cases that were weak under the original inputs, and that spatial grounding is consistently the weakest component of performance.

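To make the construction concrete, here is a minimal sketch of what a shared grounded event record and its deterministic query derivation could look like. This is not the authors' code: the `GroundedEventRecord` and `derive_queries` names, the field layout, and the question templates are all illustrative assumptions based only on the description above.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class GroundedEventRecord:
    """One shared record per clip; every query family is derived from it."""
    clip_id: str
    source: str                                 # e.g. "SSV2", "YouCook2", "HoloAssist", "Roundabout-TAU"
    physics_domain: str                         # one of the six physics domains
    event_label: str                            # semantic description of the event ("what")
    when: Tuple[float, float]                   # temporal window in seconds ("when")
    where: Tuple[float, float, float, float]    # normalized xyxy bounding box ("where")

def derive_queries(rec: GroundedEventRecord) -> List[Dict]:
    """Derive the three query families from one record.

    Temporal and spatial targets are shared across families; only the question
    template and the semantic "what" target differ (templates are illustrative).
    """
    templates = {
        "physics":      f"Which physical interaction ({rec.physics_domain}) occurs, and when and where?",
        "vstar_like":   "What is happening in this clip, and when and where does it happen?",
        "neutral_rstr": "Identify the event, its time interval, and its location.",
    }
    return [
        {
            "clip_id": rec.clip_id,
            "family": family,
            "question": question,
            "a_what": rec.event_label,   # in the real benchmark this is a deterministic, family-appropriate target
            "a_when": rec.when,          # shared across families
            "a_where": rec.where,        # shared across families
        }
        for family, question in templates.items()
    ]
```
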
Abstract

Physical video understanding requires more than naming an event correctly. A model can answer a question about pouring, sliding, or collision from textual regularities while still failing to localize the event in time or space. We introduce a grounded benchmark for physical video understanding that extends the what–when–where evaluation structure of V-STaR to four video sources, six physics domains, three prompt families (physics, vstar_like, and neutral_rstr), and four input conditions (original, shuffled, ablated, and frame-masked). The benchmark contains 1,560 base video clips from SSV2, YouCook2, HoloAssist, and Roundabout-TAU. Each clip is first converted into a shared grounded event record, and the three query families are derived from that record. Temporal and spatial targets are shared across prompt families, while the non-physics families use deterministic, family-appropriate semantic a_what targets derived from the same record. Across models and prompt families, physics remains the strongest regime overall, vstar_like is the clearest non-physics semantic comparison, and neutral_rstr behaves as a harder templated control. Prompt-family robustness is selective rather than universal, perturbation gains cluster in weak original cases, and spatial grounding is the weakest component across settings. These results suggest that video Q&A reasoning benchmarks should report physically grounded, prompt-aware, and perturbation-aware diagnostics alongside aggregate accuracy.
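
As a rough illustration of the kind of prompt-aware, perturbation-aware diagnostics the abstract calls for, the sketch below loops over prompt families and input conditions and reports per-component what/when/where scores. The `model.answer` interface, the IoU-based when/where metrics, and the dictionary layout are assumptions made for illustration, not the paper's actual evaluation code.

```python
import itertools
from statistics import mean

PROMPT_FAMILIES = ["physics", "vstar_like", "neutral_rstr"]
CONDITIONS = ["original", "shuffled", "ablated", "frame_masked"]

def temporal_iou(pred, gold):
    """Intersection-over-union of two [start, end] intervals."""
    inter = max(0.0, min(pred[1], gold[1]) - max(pred[0], gold[0]))
    union = max(pred[1], gold[1]) - min(pred[0], gold[0])
    return inter / union if union > 0 else 0.0

def spatial_iou(pred, gold):
    """Intersection-over-union of two normalized xyxy boxes."""
    ix = max(0.0, min(pred[2], gold[2]) - max(pred[0], gold[0]))
    iy = max(0.0, min(pred[3], gold[3]) - max(pred[1], gold[1]))
    inter = ix * iy
    def area(b):
        return max(0.0, b[2] - b[0]) * max(0.0, b[3] - b[1])
    union = area(pred) + area(gold) - inter
    return inter / union if union > 0 else 0.0

def evaluate(model, queries_by_condition):
    """Report what/when/where diagnostics per (prompt family, input condition)."""
    report = {}
    for family, condition in itertools.product(PROMPT_FAMILIES, CONDITIONS):
        rows = [q for q in queries_by_condition[condition] if q["family"] == family]
        if not rows:
            continue
        preds = [model.answer(q) for q in rows]   # hypothetical model interface
        report[(family, condition)] = {
            "what_acc":  mean(p["what"] == q["a_what"] for p, q in zip(preds, rows)),
            "when_iou":  mean(temporal_iou(p["when"], q["a_when"]) for p, q in zip(preds, rows)),
            "where_iou": mean(spatial_iou(p["where"], q["a_where"]) for p, q in zip(preds, rows)),
        }
    return report
```

Reporting scores per (family, condition) cell, rather than a single aggregate accuracy, is what lets the selective robustness and the weakness of spatial grounding described above become visible.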