Multi-Camera View Scaling for Data-Efficient Robot Imitation Learning

arXiv cs.RO / 4/2/2026


Key Points

  • The paper addresses a key bottleneck in robotic imitation learning: policies generalize poorly when expert demonstrations lack diversity, yet collecting diverse trajectories across environments is expensive and difficult.
  • It proposes a data-efficient framework that increases training diversity by scaling multi-camera viewpoints for each expert trajectory, effectively creating pseudo-demonstrations without requiring additional human effort.
  • The authors study how different action-space choices interact with view scaling, finding that camera-space action representations further enhance the diversity introduced by multiple views and improve viewpoint invariance in visual features.
  • A multiview action aggregation method is introduced so that single-view policies can also benefit from multiple cameras at deployment time.
  • Experiments in both simulation and real-world manipulation tasks show significant improvements in data efficiency and generalization over single-view imitation learning baselines, with minimal added hardware complexity.
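
The core data-expansion step described above can be sketched as follows. The data layout, field names, and function name here are illustrative assumptions, not the paper's actual implementation: each expert trajectory recorded by several synchronized cameras is split into one pseudo-demonstration per view, all sharing the same expert actions.

```python
import numpy as np

def expand_with_views(trajectory, views):
    """Create one pseudo-demonstration per synchronized camera view.

    `trajectory` holds the per-step expert actions; `views` maps a camera
    name to that camera's synchronized image sequence for the same episode.
    Names and structure are illustrative, not the paper's data format.
    """
    pseudo_demos = []
    for cam_name, images in views.items():
        pseudo_demos.append({
            "camera": cam_name,
            "observations": images,            # this view's frames
            "actions": trajectory["actions"],  # shared expert actions
        })
    return pseudo_demos

# One expert trajectory, 5 steps of 7-DoF actions, seen by 3 cameras
traj = {"actions": np.zeros((5, 7))}
views = {f"cam{i}": np.zeros((5, 64, 64, 3)) for i in range(3)}
demos = expand_with_views(traj, views)
print(len(demos))  # 3 pseudo-demonstrations from a single trajectory
```

The point is that training diversity grows linearly with the number of cameras while human demonstration effort stays constant.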

Abstract

The generalization ability of imitation learning policies for robotic manipulation is fundamentally constrained by the diversity of expert demonstrations, while collecting demonstrations across varied environments is costly and difficult in practice. In this paper, we propose a practical framework that exploits inherent scene diversity without additional human effort by scaling camera views during demonstration collection. Instead of acquiring more trajectories, multiple synchronized camera perspectives are used to generate pseudo-demonstrations from each expert trajectory, which enriches the training distribution and improves viewpoint invariance in visual representations. We analyze how different action spaces interact with view scaling and show that camera-space representations further enhance diversity. In addition, we introduce a multiview action aggregation method that allows single-view policies to benefit from multiple cameras during deployment. Extensive experiments in simulation and real-world manipulation tasks demonstrate significant gains in data efficiency and generalization compared to single-view baselines. Our results suggest that scaling camera views provides a practical and scalable solution for imitation learning, which requires minimal additional hardware setup and integrates seamlessly with existing imitation learning algorithms. The website of our project is https://yichen928.github.io/robot_multiview.
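
The multiview action aggregation mentioned in the abstract can be illustrated with a simplified sketch: run the policy on each camera's observation, map each camera-frame prediction into a common robot base frame, and average. The rotation-only action model and all names below are assumptions for illustration, not the paper's formulation (which may use full SE(3) transforms).

```python
import numpy as np

def aggregate_actions(cam_actions, cam_to_base):
    """Average per-camera action predictions in a common base frame.

    `cam_actions` maps camera name -> predicted 3-D translation command
    expressed in that camera's frame; `cam_to_base` maps camera name ->
    3x3 rotation from that camera's frame to the robot base frame.
    A simplified stand-in for multiview action aggregation.
    """
    base_actions = [cam_to_base[c] @ a for c, a in cam_actions.items()]
    return np.mean(base_actions, axis=0)

# Two cameras: one aligned with the base, one rotated 90 degrees about z
R_id = np.eye(3)
R_z90 = np.array([[0., -1., 0.],
                  [1.,  0., 0.],
                  [0.,  0., 1.]])
preds = {"front": np.array([1.,  0., 0.]),   # +x motion, base-aligned view
         "side":  np.array([0., -1., 0.])}   # same motion seen from side cam
agg = aggregate_actions(preds, {"front": R_id, "side": R_z90})
print(agg)  # both views agree in the base frame -> [1. 0. 0.]
```

Averaging consistent per-view predictions in a shared frame is one way a policy trained on single-view pseudo-demonstrations can still exploit all available cameras at deployment.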