Active Stereo-Camera Outperforms Multi-Sensor Setup in ACT Imitation Learning for Humanoid Manipulation

arXiv cs.RO / 3/31/2026


Key Points

  • The paper benchmarks 14 sensor combinations for Action Chunking with Transformers (ACT) imitation learning on the Unitree G1 humanoid with three-finger hands, across two manipulation tasks.
  • In data-limited regimes, the study finds that adding more modalities can reduce performance due to training inefficiencies, and it emphasizes that “more sensors” is not automatically better.
  • A minimal active stereo-camera setup achieves strong results, reaching 87.5% success in spatial generalization and 94.4% success in a structured manipulation task.
  • Adding pressure/tactile sensors to the active stereo setup significantly lowers performance to 67.3% in the structured task, attributed to low signal-to-noise ratio.
  • The authors release an open-source Unified Ablation Framework that uses sensor masking over a master dataset to systematically evaluate how sensory choices affect IL outcomes.
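The masking idea behind the ablation framework is simple: record one "master" dataset with every sensor active, then silence unused modalities per run so all 14 configurations train on identical episodes. A minimal sketch of that pattern is below; the modality names, shapes, and `mask_observation` helper are illustrative, not the framework's actual API.

```python
import numpy as np

# Hypothetical sketch of sensor masking over a master dataset: each
# ablation run keeps only its active modalities and zeroes the rest,
# so every sensor combination is evaluated on the same recorded data.

ALL_MODALITIES = ["stereo_rgb", "tactile", "proprio"]  # illustrative names

def mask_observation(obs: dict, active: set) -> dict:
    """Zero out every modality not in the active sensor set."""
    return {
        key: value if key in active else np.zeros_like(value)
        for key, value in obs.items()
    }

# Example: the minimal active-stereo configuration (plus proprioception).
obs = {
    "stereo_rgb": np.random.rand(2, 480, 640, 3),  # left/right images
    "tactile": np.random.rand(2, 16),              # per-finger pressure pads
    "proprio": np.random.rand(29),                 # joint positions
}
masked = mask_observation(obs, active={"stereo_rgb", "proprio"})
```

Keeping the tensor shapes fixed (zeroing rather than dropping channels) means one model architecture can be reused across all ablations.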

Abstract

The complexity of teaching humanoid robots new tasks is one of the major reasons hindering their widespread adoption in industry. While Imitation Learning (IL), particularly Action Chunking with Transformers (ACT), enables rapid task acquisition, there is no consensus yet on the optimal sensory hardware required for manipulation tasks. This paper benchmarks 14 sensor combinations on the Unitree G1 humanoid robot equipped with three-finger hands for two manipulation tasks. We explicitly evaluate the integration of tactile and proprioceptive modalities alongside active vision. Our analysis demonstrates that strategic sensor selection can outperform complex configurations in data-limited regimes while reducing computational overhead. We develop an open-source Unified Ablation Framework that utilizes sensor masking on a comprehensive master dataset. Results indicate that additional modalities often degrade performance for IL with limited data. A minimal active stereo-camera setup outperformed complex multi-sensor configurations, achieving 87.5% success in a spatial generalization task and 94.4% in a structured manipulation task. Conversely, adding pressure sensors to this setup reduced success to 67.3% in the latter task due to a low signal-to-noise ratio. We conclude that in data-limited regimes, active vision offers a superior trade-off between robustness and complexity. While tactile modalities may require larger datasets to be effective, our findings validate that strategic sensor selection is critical for designing an efficient learning process.
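For readers unfamiliar with the ACT component: instead of predicting one action per step, the policy emits a chunk of k future actions, and overlapping chunks covering the same timestep are blended with exponentially decaying weights (temporal ensembling). The sketch below illustrates only that blending step; `chunk_buffer`, the chunk size, and the decay rate `m` are illustrative stand-ins, not this paper's settings.

```python
import numpy as np

# Minimal sketch of ACT-style temporal ensembling: every control step the
# policy predicts k future actions; all previously predicted chunks that
# still cover the current timestep t are averaged with weights exp(-m*i),
# where i = 0 is the oldest prediction (weight direction and m are tunable).

def ensemble_step(chunk_buffer, t, k=8, m=0.1):
    """Blend all action chunks covering timestep t into one action.

    chunk_buffer: list of (t0, chunk) pairs, oldest first, where chunk
    is a (k, action_dim) array predicted at timestep t0.
    """
    preds, weights = [], []
    for i, (t0, chunk) in enumerate(chunk_buffer):
        if t0 <= t < t0 + k:                 # chunk still covers step t
            preds.append(chunk[t - t0])
            weights.append(np.exp(-m * i))   # exponential decay over age
    w = np.array(weights) / np.sum(weights)  # normalize to a convex blend
    return np.sum(np.stack(preds) * w[:, None], axis=0)

# Usage: each step, append (t, policy(obs)) to chunk_buffer, then execute
# ensemble_step(chunk_buffer, t) on the robot.
```

Chunking is what makes the sensor question acute: every extra modality enlarges the observation the transformer must attend over for all k predicted steps, which is where the paper's training-inefficiency argument bites in data-limited regimes.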