PhysVid: Physics-Aware Local Conditioning for Generative Video Models

arXiv cs.AI / 3/30/2026


Key Points

  • The paper argues that current generative video models often produce visually plausible but physically incorrect motion, reducing real-world reliability.
  • It introduces PhysVid, a physics-aware local conditioning method that attaches physics-grounded descriptions of states, interactions, and constraints to temporally contiguous frame chunks and fuses them with the global prompt via chunk-aware cross-attention during training (see the sketch after this list).
  • During inference, PhysVid uses “negative physics prompts” that describe locally relevant law violations to steer the model away from implausible trajectories.
  • Experiments on VideoPhy show about a 33% improvement in physical commonsense scores versus baseline generators, with up to an 8% gain on VideoPhy2.
  • The authors conclude that local, physics-grounded guidance meaningfully increases physical plausibility and is a step toward more physics-grounded video generation.
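
Below is a minimal sketch of how chunk-aware cross-attention could fuse a global prompt with per-chunk physics descriptions during training. The module name, tensor shapes, and the concatenation-based fusion are assumptions made for illustration; the paper's actual architecture may differ.

```python
# Illustrative sketch only; shapes and fusion strategy are assumptions,
# not the paper's exact design.
import torch
import torch.nn as nn


class ChunkAwareCrossAttention(nn.Module):
    """Fuses global-prompt tokens with per-chunk physics descriptions.

    Frames are grouped into temporally contiguous chunks; each chunk attends
    to the global prompt embedding concatenated with its own physics text
    embedding (states, interactions, constraints).
    """

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(
        self,
        frame_tokens: torch.Tensor,   # (B, C, T, N, D): C chunks of T frames with N tokens each
        global_text: torch.Tensor,    # (B, L_g, D): global prompt embedding
        chunk_text: torch.Tensor,     # (B, C, L_c, D): physics description per chunk
    ) -> torch.Tensor:
        B, C, T, N, D = frame_tokens.shape
        out = torch.empty_like(frame_tokens)
        for c in range(C):
            # Each chunk sees the global prompt plus only its own physics text.
            context = torch.cat([global_text, chunk_text[:, c]], dim=1)  # (B, L_g + L_c, D)
            query = frame_tokens[:, c].reshape(B, T * N, D)
            fused, _ = self.attn(query, context, context)
            out[:, c] = fused.reshape(B, T, N, D)
        return out
```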

Abstract

Generative video models achieve high visual fidelity but often violate basic physical principles, limiting reliability in real-world settings. Prior attempts to inject physics rely on conditioning: frame-level signals are domain-specific and short-horizon, while global text prompts are coarse and noisy, missing fine-grained dynamics. We present PhysVid, a physics-aware local conditioning scheme that operates over temporally contiguous chunks of frames. Each chunk is annotated with physics-grounded descriptions of states, interactions, and constraints, which are fused with the global prompt via chunk-aware cross-attention during training. At inference, we introduce negative physics prompts (descriptions of locally relevant law violations) to steer generation away from implausible trajectories. On VideoPhy, PhysVid improves physical commonsense scores by approximately 33% over baseline video generators, and by up to approximately 8% on VideoPhy2. These results show that local, physics-aware guidance substantially increases physical plausibility in generative video and marks a step toward physics-grounded video models.
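
The inference-time steering with negative physics prompts can be read as a classifier-free-guidance-style combination of two noise predictions. The sketch below illustrates that reading; the guidance formula and the names `denoiser` and `guidance_scale` are hypothetical and not taken from the paper.

```python
# Illustrative sketch of steering away from "negative physics prompts" at
# inference, written in a classifier-free-guidance style. This is an assumed
# formulation, not the paper's exact procedure.
import torch


def guided_noise_prediction(
    denoiser,                             # callable: (latents, t, text_emb) -> predicted noise
    latents: torch.Tensor,
    t: torch.Tensor,
    physics_prompt_emb: torch.Tensor,     # global prompt + per-chunk physics conditioning
    negative_physics_emb: torch.Tensor,   # embedding of the locally relevant violation text
    guidance_scale: float = 7.5,
) -> torch.Tensor:
    # Predict noise under the physics-grounded prompt and under the negative
    # physics prompt, then push the update away from the violating direction.
    eps_pos = denoiser(latents, t, physics_prompt_emb)
    eps_neg = denoiser(latents, t, negative_physics_emb)
    return eps_neg + guidance_scale * (eps_pos - eps_neg)
```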