Robot Learning from Human Videos: A Survey

arXiv cs.CV / 5/1/2026


Key Points

  • The survey identifies scaling robot data as a major bottleneck in embodied AI and robotics, and highlights human-video-based learning as a promising approach to alleviate it.
  • It reviews foundational policy learning concepts in robotics and the key interfaces for incorporating human videos into robot learning pipelines.
  • The paper proposes a hierarchical taxonomy for transferring human videos into robot skills, organized by task-, observation-, and action-oriented pathways, and analyzes how these methods relate across data setups and learning paradigms.
  • It examines data foundations, including widely used human-video datasets, video generation methods, and large-scale statistics on dataset creation and utilization trends.
  • The survey concludes by outlining core challenges and limitations of the field and suggesting directions for future research.
  • The work also provides a curated, up-to-date reading list via a linked GitHub repository.

Abstract

A critical bottleneck hindering further advancement in embodied AI and robotics is the challenge of scaling robot data. To address this, learning robot manipulation skills from human video data has attracted rapidly growing attention in recent years, driven by the abundance of human activity videos and advances in computer vision. This line of research promises to let robots acquire skills passively from the vast and readily available resource of human demonstrations, thereby supporting scalable learning for generalist robotic systems. We therefore present this survey to provide a comprehensive and up-to-date review of human-video-based learning techniques in robotics, focusing on both human-robot skill transfer and data foundations. We first review the foundations of policy learning in robotics, then describe the fundamental interfaces for incorporating human videos. Subsequently, we introduce a hierarchical taxonomy of transferring human videos to robot skills, covering task-, observation-, and action-oriented pathways, along with a cross-family analysis of how these pathways couple with different data configurations and learning paradigms. In addition, we investigate the data foundations, including widely used human video datasets and video generation schemes, and provide large-scale statistical trends in dataset development and utilization. Finally, we highlight the challenges and limitations intrinsic to this field and delineate potential avenues for future research. The paper list of our survey is available at https://github.com/IRMVLab/awesome-robot-learning-from-human-videos.