ROPA: Synthetic Robot Pose Generation for RGB-D Bimanual Data Augmentation

arXiv cs.RO / 4/6/2026

Key Points

  • The paper introduces ROPA, an offline imitation-learning data augmentation method that fine-tunes Stable Diffusion to synthesize third-person (eye-to-hand) RGB and RGB-D observations at novel robot poses for bimanual manipulation.
  • ROPA also generates matching joint-space action labels and uses constrained optimization to keep the synthesized robot–object interaction physically consistent via gripper-to-object contact constraints.
  • Experiments on 5 simulated and 3 real-world bimanual tasks (2625 simulation trials and 300 real-world trials) show ROPA outperforming both baselines and ablations.
  • The work targets a key scalability gap: improving pose/scene coverage for RGB-D imitation learning without the expensive process of collecting diverse, precise real demonstrations.
  • A project website is provided to share code and resources for the proposed augmentation approach.
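The constrained-optimization idea in the second bullet can be illustrated with a toy sketch: when perturbing a robot pose for augmentation, a gripper that is grasping an object must keep its tip at the object's contact point, so the sampled pose is projected onto that constraint. The snippet below is a minimal illustration, not the paper's implementation; the 2-link planar arm, link lengths, and `synthesize_pose` helper are all hypothetical stand-ins for the real bimanual kinematics.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical 2-link planar arm standing in for one bimanual arm.
L1, L2 = 0.5, 0.4

def fk(q):
    """Gripper-tip position for joint angles q = [q1, q2]."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def synthesize_pose(q_sampled, contact_point):
    """Project a sampled novel pose onto the contact constraint:
    minimize ||q - q_sampled||^2  s.t.  fk(q) == contact_point,
    i.e. stay close to the sampled pose while the gripper keeps
    touching the grasped object."""
    res = minimize(
        lambda q: np.sum((q - q_sampled) ** 2),
        x0=q_sampled,
        constraints=[{"type": "eq",
                      "fun": lambda q: fk(q) - contact_point}],
        method="SLSQP",
    )
    return res.x

# The object is held at the contact point reached by some demo pose;
# a nearby sampled pose is corrected so the grasp stays intact.
contact = fk(np.array([0.3, 0.8]))
q_new = synthesize_pose(np.array([0.5, 0.6]), contact)
```

The corrected joint vector `q_new` would then serve double duty in a scheme like ROPA's: it parameterizes the synthesized observation (the novel robot pose to render) and directly provides the matching joint-space action label.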

Abstract

Training robust bimanual manipulation policies via imitation learning requires demonstration data with broad coverage over robot poses, contacts, and scene contexts. However, collecting diverse and precise real-world demonstrations is costly and time-consuming, which hinders scalability. Prior works have addressed this with data augmentation, typically for either eye-in-hand (wrist camera) setups with RGB inputs or for generating novel images without paired actions, leaving augmentation for eye-to-hand (third-person) RGB-D training with new action labels less explored. In this paper, we propose Synthetic Robot Pose Generation for RGB-D Bimanual Data Augmentation (ROPA), an offline imitation learning data augmentation method that fine-tunes Stable Diffusion to synthesize third-person RGB and RGB-D observations of novel robot poses. Our approach simultaneously generates corresponding joint-space action labels while employing constrained optimization to enforce physical consistency through appropriate gripper-to-object contact constraints in bimanual scenarios. We evaluate our method on 5 simulated and 3 real-world tasks. Our results across 2625 simulation trials and 300 real-world trials demonstrate that ROPA outperforms baselines and ablations, showing its potential for scalable RGB and RGB-D data augmentation in eye-to-hand bimanual manipulation. Our project website is available at: https://ropaaug.github.io/.