Learning Physics from Pretrained Video Models: A Multimodal Continuous and Sequential World Interaction Model for Robotic Manipulation

arXiv cs.RO / 4/24/2026


Key Points

  • The paper proposes PhysGen, a framework that repurposes pretrained autoregressive video generation models as a proxy physics simulator for robotic manipulation policy learning.
  • PhysGen learns continuous, multimodal world interaction by unifying video observations and robot actions into shared physical tokens, bridging the gap between discrete video generation and continuous robot control.
  • It introduces training- and inference-efficiency techniques (causal masking, inverse kinematics, Lookahead Multi-Token Prediction (L-MTP), and KV caching) to ensure efficient convergence and fast rollout.
  • Experiments on Libero and ManiSkill show PhysGen outperforms strong baselines, including OpenVLA and WorldVLA, with notable margin improvements.
  • In real-world tests, PhysGen achieves performance comparable to large action-pretrained models (e.g., π0) without action-specific pretraining, including on challenging tasks like grasping transparent objects.
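To make the "shared physical tokens" idea concrete, here is a minimal NumPy sketch of interleaving continuous video latents and robot actions into one token sequence under a causal mask. The layout, dimensions, and projection are hypothetical illustrations, not the paper's actual tokenizer or architecture.

```python
import numpy as np

def build_physical_tokens(video_latents, actions, d_model=8):
    """Interleave per-frame video latents and action embeddings into one
    shared 'physical token' sequence (hypothetical layout; the paper's
    exact tokenization scheme is not specified here)."""
    # Project 7-DoF actions up to the shared model width with a fixed
    # (here random; in practice learned) linear map.
    rng = np.random.default_rng(0)
    proj = rng.standard_normal((actions.shape[-1], d_model))
    action_tokens = actions @ proj
    # Interleave: [frame_0, action_0, frame_1, action_1, ...]
    seq = np.empty((2 * len(video_latents), d_model))
    seq[0::2] = video_latents
    seq[1::2] = action_tokens
    return seq

def causal_mask(seq_len):
    """Standard lower-triangular causal mask: token i may attend
    only to tokens 0..i, matching autoregressive generation."""
    return np.tril(np.ones((seq_len, seq_len), dtype=bool))

# Example: 4 frames of 8-dim latents and 4 seven-dim actions.
frames = np.zeros((4, 8))
acts = np.zeros((4, 7))
tokens = build_physical_tokens(frames, acts)
mask = causal_mask(len(tokens))
print(tokens.shape, mask.shape)  # (8, 8) (8, 8)
```

Because both modalities live in one continuous sequence, the same autoregressive backbone can predict the next frame latent and the next action, which is the bridge the paper describes between video generation and continuous control.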

Abstract

The scarcity of large-scale robotic data has motivated the repurposing of foundation models from other modalities for policy learning. In this work, we introduce PhysGen (Learning Physics from Pretrained Video Generation Models), a scalable continuous and sequential world interaction framework that leverages autoregressive video generation to solve robotic manipulation tasks. By treating the pretrained video model as a proxy for a physics simulator, PhysGen models the dynamic interplay between the external environment and robot actions. We introduce a multimodal continuous representation that unifies video and action into shared physical tokens, bridging the gap between discrete video generation and continuous robotic control. This approach enables the seamless transfer of implicit physical knowledge, such as object permanence and dynamics, from video pretraining to downstream manipulation. To ensure efficient convergence, we incorporate causal masking, inverse kinematics, Lookahead Multi-Token Prediction (L-MTP), and key-value (KV) caching. Experimental results on the Libero and ManiSkill benchmarks demonstrate that PhysGen consistently outperforms robust baselines, surpassing OpenVLA and WorldVLA by margins of 13.8% and 8.8%, respectively. Notably, in real-world scenarios, PhysGen matches the performance of large-scale action-pretrained models like π0 without requiring prior action-specific pretraining, demonstrating superior capability in physically complex tasks such as grasping transparent objects. These findings validate the potential of extracting physical intuition from pretrained video generators to facilitate generalizable robotic manipulation.
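The KV caching mentioned in the abstract is the standard trick for speeding up autoregressive rollout: keys and values for past tokens are stored so each new step computes attention only for the newest token. A single-head illustrative sketch (not the paper's architecture) follows; all names and shapes are assumptions.

```python
import numpy as np

def attention_step(x_new, k_cache, v_cache, Wq, Wk, Wv):
    """One autoregressive decoding step with a key-value (KV) cache:
    only the newest token's K/V are computed; earlier steps' K/V are
    reused, avoiding quadratic recomputation during rollout."""
    q = x_new @ Wq                      # query for the new token only
    k_cache.append(x_new @ Wk)          # append this step's key
    v_cache.append(x_new @ Wv)          # ...and this step's value
    K = np.stack(k_cache)               # (t, d): all cached keys
    V = np.stack(v_cache)               # (t, d): all cached values
    scores = K @ q / np.sqrt(q.size)    # attend over all cached steps
    w = np.exp(scores - scores.max())   # numerically stable softmax
    w /= w.sum()
    return w @ V                        # attended output for new token

d = 4
rng = np.random.default_rng(1)
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
k_cache, v_cache = [], []
outs = [attention_step(rng.standard_normal(d), k_cache, v_cache, Wq, Wk, Wv)
        for _ in range(3)]
print(len(k_cache), outs[-1].shape)  # 3 (4,)
```

Per step this costs O(t·d) instead of recomputing attention over the full sequence from scratch, which matters when a world model must roll out many frames and actions per manipulation episode.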