CRAFT: Video Diffusion for Bimanual Robot Data Generation

arXiv cs.RO / 4/7/2026


Key Points

  • CRAFT introduces a diffusion-based framework that generates scalable, temporally coherent bimanual robot demonstration videos with associated action labels for training.
  • The method conditions video diffusion on Canny/edge-based structural cues derived from simulator trajectories, enabling physically plausible trajectory variations and a unified augmentation pipeline.
  • It supports diverse synthetic variations including object pose changes, camera viewpoint/lighting/background shifts, cross-embodiment transfer, and multi-view synthesis.
  • By starting from only a few real-world demonstrations and avoiding real-robot replay, CRAFT addresses the cost and limited visual diversity of real-world data collection and improves Sim2Real training.
  • Experiments on both simulated and real-world bimanual tasks show higher success rates than existing augmentation and simple data-scaling baselines, indicating better generalization for dual-arm manipulation.

Abstract

Bimanual robot learning from demonstrations is fundamentally limited by the cost and narrow visual diversity of real-world data, which constrains policy robustness across viewpoints, object configurations, and embodiments. We present Canny-guided Robot Data Generation using Video Diffusion Transformers (CRAFT), a video diffusion-based framework for scalable bimanual demonstration generation that synthesizes temporally coherent manipulation videos while producing action labels. By conditioning video diffusion on edge-based structural cues extracted from simulator-generated trajectories, CRAFT produces physically plausible trajectory variations and supports a unified augmentation pipeline spanning object pose changes, camera viewpoints, lighting and background variations, cross-embodiment transfer, and multi-view synthesis. We leverage a pre-trained video diffusion model to convert simulated videos, along with action labels from the simulation trajectories, into action-consistent demonstrations. Starting from only a few real-world demonstrations, CRAFT generates a large, visually diverse set of photorealistic training data, bypassing the need to replay demonstrations on the real robot (Sim2Real). Across simulated and real-world bimanual tasks, CRAFT improves success rates over existing augmentation strategies and straightforward data scaling, demonstrating that diffusion-based video generation can substantially expand demonstration diversity and improve generalization for dual-arm manipulation tasks. Our project website is available at: https://craftaug.github.io/
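The abstract's core mechanism is conditioning a video diffusion model on edge maps extracted from simulator-rendered frames. The sketch below illustrates only that preprocessing step, not the paper's actual pipeline: it builds a per-frame edge-map conditioning tensor from a clip. To stay dependency-free it uses a gradient-magnitude threshold as a simplified stand-in for the Canny detector; the function names and threshold are illustrative assumptions, not from CRAFT.

```python
import numpy as np

def simple_edge_map(gray, thresh=0.2):
    """Binary edge map from gradient magnitude.

    A simplified stand-in for the Canny detector: central-difference
    gradients, magnitude, then a threshold relative to the max.
    """
    gray = gray.astype(float)
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    gx[:, 1:-1] = (gray[:, 2:] - gray[:, :-2]) / 2.0
    gy[1:-1, :] = (gray[2:, :] - gray[:-2, :]) / 2.0
    mag = np.hypot(gx, gy)
    if mag.max() == 0:
        return np.zeros(gray.shape, dtype=np.uint8)
    return (mag > thresh * mag.max()).astype(np.uint8)

def extract_conditioning(frames):
    """Stack per-frame edge maps of an RGB clip into a (T, H, W) tensor.

    `frames` is a list of (H, W, 3) uint8 arrays, e.g. simulator renders;
    the result would serve as the structural conditioning signal.
    """
    grays = [f.mean(axis=-1) for f in frames]  # crude RGB -> grayscale
    return np.stack([simple_edge_map(g) for g in grays])

# Tiny synthetic clip: 4 black frames with a white square.
frames = [np.zeros((64, 64, 3), dtype=np.uint8) for _ in range(4)]
for f in frames:
    f[16:48, 16:48] = 255
cond = extract_conditioning(frames)
print(cond.shape)  # (4, 64, 64)
```

In a real pipeline this tensor would be paired with the simulator's action labels and fed to the diffusion model as structural guidance, so that appearance (lighting, background, embodiment) can vary while the motion encoded in the edges is preserved.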