Bridging the Embodiment Gap: Disentangled Cross-Embodiment Video Editing

arXiv cs.RO / 5/6/2026

📰 News · Developer Stack & Infrastructure · Models & Research

Key Points

  • The paper tackles a key robotics challenge: learning manipulation from human videos while overcoming the distribution shift between human and robot embodiments.
  • It proposes disentangled cross-embodiment video editing by factorizing a demonstration into two independent latent spaces—one for task information and one for embodiment/kinematics—using a dual contrastive objective (a minimal sketch follows this list).
  • A parameter-efficient adapter injects the learned latent codes into a frozen video diffusion model to generate coherent robot execution videos from a single human demonstration.
  • The method is designed to avoid the need for paired cross-embodiment training data (human–robot aligned examples).
  • Experiments report improved temporal consistency and morphological accuracy in the generated robot demonstrations, positioning the approach as a scalable way to leverage large-scale human video data for robot learning.
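
For intuition, the dual contrastive objective from the second key point can be pictured with a short PyTorch-style sketch. Everything below is an illustrative assumption rather than the paper's code: the encoder names (task_enc, emb_enc), the InfoNCE consistency term, and the cross-correlation penalty used as a stand-in for the mutual-information minimization between the two latent spaces.

```python
# Hypothetical sketch of a dual contrastive objective: intra-space consistency
# plus cross-space independence. Not the paper's implementation.
import torch
import torch.nn.functional as F

def info_nce(a: torch.Tensor, b: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Intra-space consistency: pull two views of the same clip together,
    push apart views of different clips (standard InfoNCE)."""
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature              # (B, B) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)

def decorrelation(z_task: torch.Tensor, z_emb: torch.Tensor) -> torch.Tensor:
    """Proxy for minimizing mutual information between the two latent spaces:
    penalize the cross-correlation between batch-normalized task and
    embodiment codes. The paper's exact MI estimator may differ."""
    zt = (z_task - z_task.mean(0)) / (z_task.std(0) + 1e-6)
    ze = (z_emb - z_emb.mean(0)) / (z_emb.std(0) + 1e-6)
    cross_corr = zt.t() @ ze / zt.size(0)         # (D_task, D_emb)
    return cross_corr.pow(2).mean()

def dual_contrastive_loss(task_enc, emb_enc, view_a, view_b, lam: float = 1.0):
    """Combine intra-space consistency (both spaces) with the cross-space
    independence penalty. view_a / view_b are two augmentations of the same
    batch of demonstration clips."""
    zt_a, zt_b = task_enc(view_a), task_enc(view_b)
    ze_a, ze_b = emb_enc(view_a), emb_enc(view_b)
    consistency = info_nce(zt_a, zt_b) + info_nce(ze_a, ze_b)
    independence = decorrelation(zt_a, ze_a)
    return consistency + lam * independence
```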

Abstract

Learning robotic manipulation from human videos is a promising solution to the data bottleneck in robotics, but the distribution shift between humans and robots remains a critical challenge. Existing approaches often produce entangled representations, where task-relevant information is coupled with human-specific kinematics, limiting their adaptability. We propose a generative framework for cross-embodiment video editing that directly addresses this by learning explicitly disentangled task and embodiment representations. Our method factorizes a demonstration video into two orthogonal latent spaces by enforcing a dual contrastive objective: it minimizes mutual information between the spaces to ensure independence while maximizing intra-space consistency to create stable representations. A parameter-efficient adapter injects these latent codes into a frozen video diffusion model, enabling the synthesis of a coherent robot execution video from a single human demonstration, without requiring paired cross-embodiment data. Experiments show our approach generates temporally consistent and morphologically accurate robot demonstrations, offering a scalable solution to leverage internet-scale human video for robot learning.
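
The adapter mechanism described in the abstract can likewise be pictured as a small trainable module that conditions a frozen video diffusion backbone on the learned codes. The FiLM-style scale-and-shift conditioning and all names below (LatentAdapter, trainable_parameters) are assumptions made for illustration, not the paper's actual architecture.

```python
# Hypothetical sketch of a parameter-efficient adapter: the diffusion backbone
# stays frozen and only a small conditioning module is trained.
import torch
import torch.nn as nn

class LatentAdapter(nn.Module):
    """Maps concatenated [task, embodiment] codes to per-channel scale/shift
    applied to a frozen block's hidden states (FiLM-style conditioning)."""
    def __init__(self, code_dim: int, hidden_dim: int):
        super().__init__()
        self.to_scale_shift = nn.Sequential(
            nn.Linear(code_dim, hidden_dim),
            nn.SiLU(),
            nn.Linear(hidden_dim, 2 * hidden_dim),
        )

    def forward(self, h: torch.Tensor, code: torch.Tensor) -> torch.Tensor:
        # h: (B, T, hidden_dim) hidden states from a frozen diffusion block
        # code: (B, code_dim) concatenated task and embodiment latents
        scale, shift = self.to_scale_shift(code).chunk(2, dim=-1)
        return h * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)

def trainable_parameters(diffusion_model: nn.Module, adapters: nn.ModuleList):
    """Freeze the backbone; only the adapters receive gradients."""
    for p in diffusion_model.parameters():
        p.requires_grad_(False)
    return adapters.parameters()
```

Under this kind of setup, only the adapter parameters would be handed to the optimizer, so the pretrained video prior is preserved while the conditioning on task and embodiment codes is learned cheaply.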