Towards High-Consistency Embodied World Model with Multi-View Trajectory Videos

arXiv cs.RO / 4/1/2026

Key Points

  • The paper introduces MTV-World, an embodied world model designed to improve consistency between predicted robotic actions and real-world physical interactions.
  • Instead of feeding low-level joint actions directly for control, it uses multi-view trajectory-video inputs derived from camera parameters and Cartesian-space transformations to drive visuomotor prediction.
  • Because projecting 3D actions into 2D views loses spatial information, the method adds a multi-view framework that compensates for that loss and targets higher physical-world consistency.
  • It forecasts future frames conditioned on an initial frame for each view and evaluates motion precision and object interaction accuracy using an auto-evaluation pipeline that combines multimodal large models with video object segmentation.
  • For spatial consistency, the authors define object location matching and use the Jaccard Index as an evaluation metric, reporting strong performance in complex dual-arm scenarios.
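The trajectory-video control signal described above amounts to projecting 3D Cartesian end-effector waypoints into each camera view using the pinhole camera model. As a minimal sketch (the function name and example calibration values are illustrative, not from the paper):

```python
import numpy as np

def project_points(points_world, K, R, t):
    """Project Nx3 world-frame points to 2D pixel coordinates.

    K: 3x3 camera intrinsic matrix.
    R, t: world-to-camera rotation (3x3) and translation (3,) extrinsics.
    """
    cam = points_world @ R.T + t   # world frame -> camera frame
    uv = cam @ K.T                 # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]  # perspective divide by depth

# Example: a waypoint one metre in front of a camera at the world origin
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)
print(project_points(np.array([[0.0, 0.0, 1.0]]), K, R, t))  # -> [[320. 240.]]
```

A point on the optical axis lands at the principal point, and depth is discarded by the perspective divide, which is exactly the spatial-information loss that motivates the paper's multi-view design.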

Abstract

Embodied world models aim to predict and interact with the physical world through visual observations and actions. However, existing models struggle to accurately translate low-level actions (e.g., joint positions) into precise robotic movements in predicted frames, leading to inconsistencies with real-world physical interactions. To address these limitations, we propose MTV-World, an embodied world model that introduces Multi-view Trajectory-Video control for precise visuomotor prediction. Specifically, instead of directly using low-level actions for control, we employ trajectory videos obtained through camera intrinsic and extrinsic parameters and Cartesian-space transformation as control signals. However, projecting 3D raw actions onto 2D images inevitably causes a loss of spatial information, making a single view insufficient for accurate interaction modeling. To overcome this, we introduce a multi-view framework that compensates for spatial information loss and ensures high consistency with the physical world. MTV-World forecasts future frames by taking multi-view trajectory videos as input and conditioning on an initial frame per view. Furthermore, to systematically evaluate both robotic motion precision and object interaction accuracy, we develop an auto-evaluation pipeline leveraging multimodal large models and referring video object segmentation models. To measure spatial consistency, we formulate it as an object location matching problem and adopt the Jaccard Index as the evaluation metric. Extensive experiments demonstrate that MTV-World achieves precise control execution and accurate physical interaction modeling in complex dual-arm scenarios.
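The Jaccard Index used for object location matching is the standard intersection-over-union between two segmentation masks. A minimal sketch (the function name and toy masks are illustrative; the paper's pipeline obtains masks from a referring video object segmentation model):

```python
import numpy as np

def jaccard_index(mask_pred, mask_gt):
    """Jaccard Index (IoU) between two boolean segmentation masks."""
    inter = np.logical_and(mask_pred, mask_gt).sum()
    union = np.logical_or(mask_pred, mask_gt).sum()
    return inter / union if union else 1.0  # both masks empty -> perfect match

# Toy example: predicted object mask shifted one row from ground truth
pred = np.zeros((4, 4), dtype=bool); pred[0:2, :] = True  # 8 pixels
gt   = np.zeros((4, 4), dtype=bool); gt[1:3, :]  = True   # 8 pixels
print(jaccard_index(pred, gt))  # intersection 4, union 12 -> 0.333...
```

A score of 1.0 means the predicted object location coincides exactly with the ground truth, while a low score flags a physically inconsistent prediction.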