
PhysAlign: Physics-Coherent Image-to-Video Generation through Feature and 3D Representation Alignment

arXiv cs.CV / March 17, 2026


Key Points

  • PhysAlign is a framework for physics-coherent image-to-video (I2V) generation that mitigates the temporal incoherence and physics violations common in existing video diffusion models (VDMs).
  • To address the scarcity of physics-annotated videos, the approach trains on a fully controllable synthetic dataset generated from rigid-body simulations with accurate, fine-grained 3D annotations (a rough simulation sketch follows this list).
  • It constructs a unified physical latent space by coupling explicit 3D geometry constraints with Gram-based spatio-temporal relational alignment to extract kinematic priors from video foundation models.
  • Experiments show PhysAlign significantly outperforms existing VDMs on tasks requiring complex physical reasoning and temporal stability, while preserving zero-shot visual quality.
  • The work aims to bridge visual synthesis with rigid-body kinematics and presents a practical paradigm for physics-grounded video generation; see the project page at https://physalign.github.io/PhysAlign.
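
The dataset pipeline is described here only at a high level. As a rough illustration of what a rigid-body data generator with ground-truth 3D annotations might look like, the snippet below rolls out a single body in a headless PyBullet session and records its exact pose at every step. PyBullet itself, the scene assets, the step count, and the annotation schema are all illustrative assumptions, not the paper's actual pipeline, which additionally renders videos and curates fine-grained physics labels.

```python
import pybullet as p
import pybullet_data

def simulate_rigid_body_clip(n_steps: int = 120, dt: float = 1.0 / 240.0):
    """Roll out a toy rigid-body scene and record per-step 3D pose
    annotations. Scene contents are illustrative placeholders."""
    p.connect(p.DIRECT)  # headless physics server, no GUI
    p.setAdditionalSearchPath(pybullet_data.getDataPath())
    p.setGravity(0, 0, -9.81)
    p.setTimeStep(dt)
    p.loadURDF("plane.urdf")  # static ground plane
    cube = p.loadURDF("cube_small.urdf", basePosition=[0, 0, 1.0])

    annotations = []
    for _ in range(n_steps):
        p.stepSimulation()
        pos, orn = p.getBasePositionAndOrientation(cube)
        annotations.append({"position": pos, "orientation_quat": orn})
    p.disconnect()
    return annotations
```

Because the whole scene is scripted, every trajectory comes with exact ground truth for free, which is the property that makes a simulation-based dataset "fully controllable" in the sense the paper describes.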

Abstract

Video Diffusion Models (VDMs) offer a promising approach for simulating dynamic scenes and environments, with broad applications in robotics and media generation. However, existing models often generate temporally incoherent content that violates basic physical intuition, significantly limiting their practical applicability. We propose PhysAlign, an efficient framework for physics-coherent image-to-video (I2V) generation that explicitly addresses this limitation. To overcome the critical scarcity of physics-annotated videos, we first construct a fully controllable synthetic data generation pipeline based on rigid-body simulation, yielding a highly curated dataset with accurate, fine-grained physics and 3D annotations. Leveraging this data, PhysAlign constructs a unified physical latent space by coupling explicit 3D geometry constraints with a Gram-based spatio-temporal relational alignment that extracts kinematic priors from video foundation models. Extensive experiments demonstrate that PhysAlign significantly outperforms existing VDMs on tasks requiring complex physical reasoning and temporal stability, without compromising zero-shot visual quality. PhysAlign shows the potential to bridge the gap between raw visual synthesis and rigid-body kinematics, establishing a practical paradigm for genuinely physics-grounded video generation. The project page is available at https://physalign.github.io/PhysAlign.
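
The abstract names two coupled training signals, explicit 3D geometry constraints and a Gram-based spatio-temporal relational alignment, without giving formulas. One plausible reading, sketched below in PyTorch, matches channel-wise Gram matrices of the VDM's intermediate features against those of a frozen video foundation model, pooled over space and time, alongside an L2 geometry term on predicted 3D point maps. The tensor shapes, the MSE objectives, the loss weight, and all function names here are assumptions for illustration, not the paper's definitions.

```python
import torch
import torch.nn.functional as F

def gram_matrix(feats: torch.Tensor) -> torch.Tensor:
    """Channel-wise Gram matrix of spatio-temporal features.

    feats: (B, T, C, H, W) activations from one network layer.
    Returns (B, C, C) relational statistics pooled over all
    T*H*W space-time positions.
    """
    b, t, c, h, w = feats.shape
    x = feats.permute(0, 2, 1, 3, 4).reshape(b, c, t * h * w)
    return (x @ x.transpose(1, 2)) / (t * h * w)

def physical_alignment_loss(vdm_feats: torch.Tensor,
                            teacher_feats: torch.Tensor,
                            pred_points: torch.Tensor,
                            gt_points: torch.Tensor,
                            gram_weight: float = 1.0) -> torch.Tensor:
    """Hypothetical combined objective: an explicit 3D geometry
    constraint plus Gram-based relational alignment to a frozen
    video-foundation-model teacher."""
    geometry = F.mse_loss(pred_points, gt_points)  # 3D constraint
    relational = F.mse_loss(gram_matrix(vdm_feats),
                            gram_matrix(teacher_feats.detach()))
    return geometry + gram_weight * relational
```

Matching Gram matrices, rather than raw features, constrains only the pairwise channel correlations across frames, which is one way a method could transfer a teacher's motion statistics without forcing the generator to copy its exact feature values.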