PhysVideo: Physically Plausible Video Generation with Cross-View Geometry Guidance

arXiv cs.CV / 3/20/2026

📰 News · Models & Research

Key Points

  • The paper introduces PhysVideo, a two-stage framework for physically plausible video generation, with Phys4View for physics-aware foreground video generation and VideoSyn for background-aware synthesis.
  • Phys4View uses physics-aware attention, geometry-enhanced cross-view attention, and temporal attention to better capture 3D dynamics from multiple orthogonal viewpoints.
  • The authors build PhysMV, a dataset of 40,000 scenes (four orthogonal viewpoints each, totaling 160,000 sequences) to train and evaluate physics-informed video generation.
  • Experiments show PhysVideo improves physical realism and spatio-temporal coherence over existing video generation methods, enabling more controllable synthesis of foreground motion within its background context.
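The summary does not spell out the attention details, but the core idea of cross-view attention can be sketched generically: tokens from each of the four orthogonal views attend over the pooled tokens of all views, optionally with an additive geometry bias on the logits. The function names, shapes, and the bias term below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_view_attention(views, geom_bias=None):
    """Toy cross-view attention.

    views: (V, T, D) array of token features, one row of T tokens per view.
    geom_bias: optional (V, T, V*T) additive bias standing in for the
        paper's geometry enhancement (hypothetical here).
    Returns (V, T, D): each view's tokens updated by attending over all views.
    """
    V, T, D = views.shape
    q = views                                  # per-view queries
    kv = views.reshape(V * T, D)               # keys/values pooled across views
    logits = q @ kv.T / np.sqrt(D)             # (V, T, V*T) similarity scores
    if geom_bias is not None:
        logits = logits + geom_bias            # inject a geometry prior
    attn = softmax(logits, axis=-1)            # normalize over all views' tokens
    return attn @ kv                           # weighted sum of values

rng = np.random.default_rng(0)
views = rng.normal(size=(4, 8, 16))  # 4 orthogonal views, 8 tokens, dim 16
out = cross_view_attention(views)
print(out.shape)  # (4, 8, 16)
```

The key design point this illustrates is that keys and values are shared across viewpoints, so information about 3D dynamics visible in one projection can correct another view's features.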

Abstract

Recent progress in video generation has led to substantial improvements in visual fidelity, yet ensuring physically consistent motion remains a fundamental challenge. Intuitively, this limitation can be attributed to the fact that real-world object motion unfolds in three-dimensional space, while video observations provide only partial, view-dependent projections of such dynamics. To address this issue, we propose PhysVideo, a two-stage framework that first generates physics-aware orthogonal foreground videos and then synthesizes full videos with background. In the first stage, Phys4View leverages physics-aware attention to capture the influence of physical attributes on motion dynamics, and enhances spatio-temporal consistency by incorporating geometry-enhanced cross-view attention and temporal attention. In the second stage, VideoSyn uses the generated foreground videos as guidance and learns the interactions between foreground dynamics and background context for controllable video synthesis. To support training, we construct PhysMV, a dataset containing 40K scenes, each consisting of four orthogonal viewpoints, resulting in a total of 160K video sequences. Extensive experiments demonstrate that PhysVideo significantly improves physical realism and spatio-temporal coherence over existing video generation methods. Home page: https://anonymous.4open.science/w/Phys4D/.
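The two-stage structure described in the abstract can be sketched as a simple dataflow: stage one produces one foreground video per orthogonal viewpoint, and stage two fuses that foreground guidance with background context. Everything below is a hypothetical stand-in (random features instead of real generative models) meant only to show the data shapes and the flow between stages.

```python
import numpy as np

rng = np.random.default_rng(1)

def phys4view(prompt_embed, n_views=4, frames=8, dim=16):
    # Stage 1 stand-in (hypothetical): emit one foreground feature
    # sequence per orthogonal viewpoint, conditioned on the prompt.
    return np.stack([
        rng.normal(size=(frames, dim)) + prompt_embed
        for _ in range(n_views)
    ])                                   # (n_views, frames, dim)

def videosyn(foreground_views, bg_embed):
    # Stage 2 stand-in (hypothetical): use the multi-view foreground as
    # guidance and combine it with background context features.
    guidance = foreground_views.mean(axis=0)   # aggregate the four views
    return guidance + bg_embed                 # (frames, dim) final features

prompt = rng.normal(size=16)
fg = phys4view(prompt)                   # (4, 8, 16): views x frames x dim
video = videosyn(fg, rng.normal(size=16))
print(video.shape)  # (8, 16)
```

The point of the split, per the abstract, is that stage one can enforce physical plausibility on the foreground in isolation across views, while stage two only has to learn how that motion interacts with the background.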