
VIGOR: VIdeo Geometry-Oriented Reward for Temporal Generative Alignment

arXiv cs.CV / 3/18/2026

📰 News · Models & Research

Key Points

  • The paper notes that video diffusion models lack explicit geometric supervision during training, causing artifacts such as object deformation, spatial drift, and depth violations in generated videos.
  • It introduces a geometry-based reward that leverages pretrained geometric foundation models to evaluate multi-view consistency via cross-frame reprojection error, computed pointwise in 3D rather than in pixel space, which makes it more robust to intensity noise.
  • It proposes a geometry-aware sampling strategy that filters out low-texture and non-semantic regions to focus evaluation on geometrically meaningful areas with reliable correspondences.
  • The reward enables two alignment pathways: post-training of a bidirectional model via supervised fine-tuning (SFT) or reinforcement learning, and inference-time optimization of a causal video model via test-time scaling with the reward as a path verifier, improving output quality without extensive retraining.
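The pointwise reprojection reward and the geometry-aware sampling described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the per-pixel 3D point maps are assumed to come from a pretrained geometric foundation model, the cross-frame transform `pose_ab` is assumed known, and the gradient-magnitude texture filter is an assumed proxy for the paper's geometry-aware sampling strategy.

```python
import numpy as np

def texture_mask(gray, grad_thresh=0.02):
    """Geometry-aware sampling sketch: keep only textured pixels, using
    image-gradient magnitude as an assumed proxy for regions with
    reliable geometric correspondences."""
    gy, gx = np.gradient(gray)
    return np.hypot(gx, gy) > grad_thresh

def reprojection_reward(points_a, points_b, pose_ab, valid_mask=None, tau=0.05):
    """Pointwise cross-frame consistency reward (illustrative sketch).

    points_a, points_b: (H, W, 3) per-pixel 3D point maps for two frames,
        assumed to come from a pretrained geometric foundation model.
    pose_ab: (4, 4) rigid transform mapping frame-a coordinates into frame b.
    valid_mask: optional (H, W) bool mask from geometry-aware sampling.
    Returns a scalar in (0, 1]; 1.0 means perfect geometric consistency.
    """
    pts = points_a.reshape(-1, 3)
    # Transform frame-a points into frame b's coordinate system.
    pts_h = np.concatenate([pts, np.ones((pts.shape[0], 1))], axis=1)
    pts_in_b = (pose_ab @ pts_h.T).T[:, :3]
    # Pointwise error: compare 3D points directly, not pixel intensities.
    err = np.linalg.norm(pts_in_b - points_b.reshape(-1, 3), axis=1)
    if valid_mask is not None:
        err = err[valid_mask.reshape(-1)]
    # Map mean 3D error to a bounded reward.
    return float(np.exp(-err.mean() / tau))
```

Comparing 3D points directly sidesteps pixel-intensity noise (lighting changes, compression), which is the motivation the paper gives for moving the error computation out of pixel space.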

Abstract

Video diffusion models lack explicit geometric supervision during training, leading to inconsistency artifacts such as object deformation, spatial drift, and depth violations in generated videos. To address this limitation, we propose a geometry-based reward model that leverages pretrained geometric foundation models to evaluate multi-view consistency through cross-frame reprojection error. Unlike previous geometric metrics that measure inconsistency in pixel space, where pixel intensity may introduce additional noise, our approach conducts error computation in a pointwise fashion, yielding a more physically grounded and robust error metric. Furthermore, we introduce a geometry-aware sampling strategy that filters out low-texture and non-semantic regions, focusing evaluation on geometrically meaningful areas with reliable correspondences to improve robustness. We apply this reward model to align video diffusion models through two complementary pathways: post-training of a bidirectional model via supervised fine-tuning (SFT) or reinforcement learning, and inference-time optimization of a causal video model (e.g., a streaming video generator) via test-time scaling with our reward as a path verifier. Experimental results validate the effectiveness of our design, demonstrating that our geometry-based reward provides superior robustness compared to other variants. By enabling efficient inference-time scaling, our method offers a practical solution for enhancing open-source video models without requiring extensive computational resources for retraining.
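The second pathway, test-time scaling with the reward as a path verifier, amounts to a best-of-N search at inference time. The sketch below assumes placeholder callables: `generate(context, rng)` standing in for a causal/streaming video generator's next-chunk sampler, and `reward(chunk)` standing in for the geometry-based reward; neither name comes from the paper.

```python
import numpy as np

def best_of_n_continuation(generate, reward, context, n=4, rng=None):
    """Test-time scaling sketch: use the geometry reward as a path verifier.

    generate(context, rng) -> a candidate next video chunk (assumed
        interface of a causal / streaming video generator).
    reward(chunk) -> float geometric-consistency score.
    Samples n candidate continuations and keeps the highest-scoring one,
    so the base model needs no retraining.
    """
    rng = rng or np.random.default_rng()
    candidates = [generate(context, rng) for _ in range(n)]
    scores = [reward(c) for c in candidates]
    best = int(np.argmax(scores))
    return candidates[best], scores[best]
```

Because verification only requires forward passes of a frozen reward model, compute scales with the number of sampled paths n rather than with any training run, which is what makes this practical for open-source models.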