Multi-View Video Diffusion Policy: A 3D Spatio-Temporal-Aware Video Action Model

arXiv cs.RO / 4/6/2026

Key Points

  • The paper introduces MV-VDP, a multi-view video diffusion policy for robotic manipulation that jointly models 3D spatial structure and temporal evolution of the environment.
  • MV-VDP predicts both multi-view heatmap videos and RGB videos, aiming to bridge the representation gap between video pretraining and action fine-tuning while also producing interpretable future state cues (a minimal sketch of this joint prediction follows this list).
  • The authors report data-efficient performance, claiming strong results on complex real-world tasks using only ten demonstration trajectories without additional pretraining.
  • Experiments on Meta-World and real-world robotic platforms show robustness to hyperparameter changes and generalization to out-of-distribution settings.
  • MV-VDP is reported to outperform prior approaches including video-prediction-based, 3D-based, and vision-language-action models, setting a new state of the art for data-efficient multi-task manipulation.
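
The key points above describe MV-VDP as a diffusion model over stacked multi-view heatmap and RGB videos. The sketch below shows, in broad strokes, what such a joint denoiser and its training loss could look like; the tensor layout, the small 3D-conv backbone, and the timestep conditioning are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumptions, not the paper's code): a denoiser over a tensor that
# stacks heatmap and RGB video channels from all camera views, trained with a
# standard DDPM epsilon-prediction loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointVideoDenoiser(nn.Module):
    def __init__(self, views: int = 3, heatmap_ch: int = 1, rgb_ch: int = 3, hidden: int = 64):
        super().__init__()
        in_ch = views * (heatmap_ch + rgb_ch)          # all views stacked along channels
        self.conv_in = nn.Conv3d(in_ch, hidden, kernel_size=3, padding=1)
        self.t_embed = nn.Linear(1, hidden)            # crude diffusion-timestep conditioning
        self.conv_out = nn.Conv3d(hidden, in_ch, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # x: (B, views*(heatmap_ch+rgb_ch), T, H, W) noisy video stack; t: (B,) timesteps
        h = self.conv_in(x)
        h = h + self.t_embed(t.float().unsqueeze(-1)).view(x.shape[0], -1, 1, 1, 1)
        return self.conv_out(F.silu(h))                # predicted noise, same shape as x

def diffusion_loss(model: nn.Module, clean: torch.Tensor, t: torch.Tensor,
                   alphas_cumprod: torch.Tensor) -> torch.Tensor:
    """One DDPM-style training step: corrupt the clean video stack, predict the noise."""
    noise = torch.randn_like(clean)
    a = alphas_cumprod[t].view(-1, 1, 1, 1, 1)
    noisy = a.sqrt() * clean + (1 - a).sqrt() * noise
    return F.mse_loss(model(noisy, t), noise)
```

At inference time, the analogous loop would iteratively denoise a random tensor into a predicted heatmap-plus-RGB video; conditioning on current observations and language is omitted here for brevity.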

Abstract

Robotic manipulation requires understanding both the 3D spatial structure of the environment and its temporal evolution, yet most existing policies overlook one or both. They typically rely on 2D visual observations and backbones pretrained on static image-text pairs, resulting in high data requirements and limited understanding of environment dynamics. To address this, we introduce MV-VDP, a multi-view video diffusion policy that jointly models the 3D spatio-temporal state of the environment. The core idea is to simultaneously predict multi-view heatmap videos and RGB videos, which 1) align the representation format of video pretraining with action fine-tuning, and 2) specify not only what actions the robot should take, but also how the environment is expected to evolve in response to those actions. Extensive experiments show that MV-VDP enables data-efficient, robust, generalizable, and interpretable manipulation. With only ten demonstration trajectories and without additional pretraining, MV-VDP successfully performs complex real-world tasks, demonstrates strong robustness across a range of model hyperparameters, generalizes to out-of-distribution settings, and predicts realistic future videos. Experiments on Meta-World and real-world robotic platforms demonstrate that MV-VDP consistently outperforms video-prediction-based, 3D-based, and vision-language-action models, establishing a new state of the art in data-efficient multi-task manipulation.
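
As one concrete illustration of how predicted heatmap videos could "specify what actions the robot should take," the snippet below decodes a single frame of multi-view heatmaps into a 3D target point via per-view argmax and linear (DLT) triangulation. This decoding scheme, the `heatmap_peak`/`triangulate` helpers, and the calibrated-camera setup are assumptions for illustration; the paper's actual action extraction may differ.

```python
# Hypothetical decoding of one multi-view heatmap frame into a 3D keypoint.
# Assumes calibrated cameras with known 3x4 projection matrices; not the authors' code.
import numpy as np

def heatmap_peak(heatmap: np.ndarray) -> np.ndarray:
    """Pixel location (u, v) of the heatmap maximum for one camera view."""
    v, u = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    return np.array([u, v], dtype=np.float64)

def triangulate(pixels, projections) -> np.ndarray:
    """Linear (DLT) triangulation of one 3D point from two or more views."""
    rows = []
    for (u, v), P in zip(pixels, projections):
        rows.append(u * P[2] - P[0])   # each view contributes two linear constraints
        rows.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.stack(rows))
    X = vt[-1]                         # null-space direction = homogeneous 3D point
    return X[:3] / X[3]

# Usage (hypothetical shapes): heatmaps is (views, H, W), proj_mats is a list of 3x4 arrays.
# target_xyz = triangulate([heatmap_peak(h) for h in heatmaps], proj_mats)
```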