ActiveGlasses: Learning Manipulation with Active Vision from Ego-centric Human Demonstration

arXiv cs.RO / 4/10/2026


Key Points

  • The paper introduces ActiveGlasses, a robot learning system that captures manipulation and perception from ego-centric human demonstrations using active vision.
  • It uses a stereo camera on smart glasses as the sole perception device: the operator wears the glasses during data collection, and the same camera is mounted on a 6-DoF perception arm for policy inference during zero-shot deployment.
  • To support zero-shot transfer across platforms, the method extracts object trajectories from demonstrations and trains an object-centric point-cloud policy that jointly predicts manipulation actions and head movement (see the sketch after this list).
  • Experiments on multiple tasks involving heavy occlusion and precise interaction show ActiveGlasses achieves zero-shot transfer, outperforms strong baselines under the same hardware setup, and generalizes across two different robot platforms.
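
To make the object-centric policy idea concrete, here is a minimal sketch of a network that consumes an object point cloud and jointly emits a manipulation action and a head (camera) motion. This is not the paper's architecture: the `ObjectCentricPolicy` class, the PointNet-style encoder, and all dimensions (a 7-D arm action, a 6-D head action) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ObjectCentricPolicy(nn.Module):
    """Toy sketch of a joint manipulation + active-vision policy.

    Takes an object-centric point cloud and produces two action
    heads: an end-effector action and a head-motion command.
    All names and dimensions are assumptions, not from the paper.
    """

    def __init__(self, hidden_dim: int = 256,
                 arm_action_dim: int = 7,    # assumed: 6-DoF pose delta + gripper
                 head_action_dim: int = 6):  # assumed: 6-DoF camera pose delta
        super().__init__()
        # Per-point MLP followed by max-pooling (PointNet-style).
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, hidden_dim), nn.ReLU(),
        )
        self.arm_head = nn.Linear(hidden_dim, arm_action_dim)
        self.head_head = nn.Linear(hidden_dim, head_action_dim)

    def forward(self, points: torch.Tensor):
        # points: (batch, num_points, 3) in object-centric coordinates.
        feats = self.point_mlp(points)           # (B, N, hidden)
        global_feat = feats.max(dim=1).values    # permutation-invariant pooling
        return self.arm_head(global_feat), self.head_head(global_feat)

# Usage: one object point cloud with 512 points.
policy = ObjectCentricPolicy()
arm_action, head_action = policy(torch.randn(1, 512, 3))
print(arm_action.shape, head_action.shape)  # torch.Size([1, 7]) torch.Size([1, 6])
```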

Abstract

Large-scale real-world robot data collection is a prerequisite for bringing robots into everyday deployment. However, existing pipelines often rely on specialized handheld devices to bridge the embodiment gap, which not only increases operator burden and limits scalability, but also makes it difficult to capture the naturally coordinated perception-manipulation behaviors of human daily interaction. This challenge calls for a more natural system that can faithfully capture human manipulation and perception behaviors while enabling zero-shot transfer to robotic platforms. We introduce ActiveGlasses, a system for learning robot manipulation from ego-centric human demonstrations with active vision. A stereo camera mounted on smart glasses serves as the sole perception device for both data collection and policy inference: the operator wears it during bare-hand demonstrations, and the same camera is mounted on a 6-DoF perception arm during deployment to reproduce human active vision. To enable zero-shot transfer, we extract object trajectories from demonstrations and use an object-centric point-cloud policy to jointly predict manipulation and head movement. Across several challenging tasks involving occlusion and precise interaction, ActiveGlasses achieves zero-shot transfer with active vision, consistently outperforms strong baselines under the same hardware setup, and generalizes across two robot platforms.
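
For a concrete picture of the deployment step, the sketch below shows one plausible way a predicted head-motion delta could be composed onto the current camera pose before being sent to the perception arm's controller. The `apply_head_delta` helper, the 4x4 homogeneous-transform convention, and the camera-frame delta assumption are all hypothetical; the paper does not specify these interfaces.

```python
import numpy as np

def apply_head_delta(cam_pose_world: np.ndarray,
                     delta_pose_cam: np.ndarray) -> np.ndarray:
    """Compose a predicted head-motion delta onto the current camera pose.

    Illustrative sketch only: poses are 4x4 homogeneous transforms, and
    the delta is assumed to be expressed in the current camera frame.
    The result would be handed to the 6-DoF perception arm's IK as a
    target pose; the actual system's conventions may differ.
    """
    return cam_pose_world @ delta_pose_cam

# Usage: nudge the camera 5 cm along its optical axis (camera z).
current = np.eye(4)
delta = np.eye(4)
delta[2, 3] = 0.05
target = apply_head_delta(current, delta)
print(target[:3, 3])  # [0.   0.   0.05]
```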