From Priors to Perception: Grounding Video-LLMs in Physical Reality

arXiv cs.CV / 5/7/2026


Key Points

  • Video-LLMs can show systematic weaknesses in fine-grained physical reasoning, including failures when visuals contradict statistical expectations.
  • The paper argues these errors come from “Semantic Prior Dominance,” where internal narrative priors hijack reasoning rather than from a basic lack of perception.
  • It introduces the Programmatic Adversarial Curriculum (PACC), a high-fidelity adversarial video dataset generated from physical laws to separate visual artifacts from true logical failures.
  • It also proposes Visual-Anchored Reasoning Chain (VARC), which requires models to ground judgments in low-level visual facts before performing logical reasoning.
  • Experiments indicate that standard LoRA fine-tuning using PACC (without architectural changes) substantially improves state-of-the-art models’ physical reasoning performance.

Abstract

While Video Large Language Models (Video-LLMs) excel at general understanding, they exhibit systematic deficits in fine-grained physical reasoning. Existing interventions not only suffer from limited generalization but also fundamentally conflate generative artifacts with genuine physical fallacies. Furthermore, we find that models fail systematically not only on anti-physics anomalies but also in counter-intuitive scenarios where visual facts contradict statistical expectations. Accordingly, we propose the Unified Attribution Theory: this dual failure stems not from a perception deficiency but from Semantic Prior Dominance -- the reasoning mechanism is hijacked by internal narrative scripts. To address this, we construct the Programmatic Adversarial Curriculum (PACC), the first high-fidelity adversarial video dataset synthesized from physical laws, which cleanly decouples visual artifacts from logical errors. Concurrently, we design the Visual-Anchored Reasoning Chain (VARC) to force models to explicitly ground their judgments in low-level visual facts before logical adjudication. Experiments demonstrate that, without invasive architectural modifications, standard LoRA fine-tuning on the PACC curriculum effectively neutralizes prior interference in state-of-the-art (SOTA) models, yielding a substantial gain in physical reasoning capability.
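The LoRA fine-tuning the abstract refers to is the standard low-rank adaptation recipe: the pretrained weights stay frozen, and only a small low-rank update is trained and later merged in. A minimal NumPy sketch of that mechanism (illustrative only; the variable names and dimensions are assumptions, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, r = 64, 64, 4   # r << d: the low-rank bottleneck
alpha = 8                    # LoRA scaling hyperparameter

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection (zero-init, so no-op at start)

def lora_forward(x):
    # Base path plus scaled low-rank path: W x + (alpha / r) * B A x
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
# At initialization B == 0, so the adapter leaves the model unchanged.
assert np.allclose(lora_forward(x), W @ x)

# After training, the update merges into W for zero-overhead inference.
B = rng.normal(size=(d_out, r))  # stand-in for trained values
W_merged = W + (alpha / r) * (B @ A)
assert np.allclose(W_merged @ x, lora_forward(x))
```

The trainable parameter count is r * (d_in + d_out) per adapted matrix rather than d_in * d_out, which is why the authors can adapt a SOTA Video-LLM on the PACC curriculum without architectural changes.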