A Unified Multi-Layer Framework for Skill Acquisition from Imperfect Human Demonstrations

arXiv cs.RO / 4/10/2026


Key Points

  • The paper argues that existing Human-Robot Interaction (HRI) skill-teaching methods and learning-from-demonstration (LfD) approaches are fragmented, lacking a single framework that is simultaneously efficient, intuitive, and broadly safe.
  • It proposes a unified three-layer control framework for robust, compliant LfD built on a foundation of universal robot compliance.
  • The first layer introduces a real-time LfD method that learns both trajectory and variable impedance from a single human demonstration to improve efficiency and reproduction fidelity.
  • The second layer adds null-space optimization to manage kinematic singularities during kinesthetic teaching and maintain a consistent interaction feel.
  • The third layer introduces null-space compliance so the robot can adapt compliantly to external interactions after learning while preserving main-task performance, validated on a 7-DOF KUKA LWR.
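The null-space machinery behind the second and third layers can be illustrated with a standard redundancy-resolution construction (a textbook sketch, not the paper's actual controller): projecting a secondary compliance torque through the projector N = I − J⁺J guarantees it produces no end-effector wrench, so the arm's body can yield to external contact while the main task is preserved. The gains, rest posture, and random Jacobian below are illustrative assumptions.

```python
import numpy as np

def null_space_compliance_torque(J, q, dq, q_rest, F_task, K_n=10.0, D_n=2.0):
    """Hedged sketch of null-space compliance for a redundant arm.

    tau = J^T F_task + N u, with N = I - pinv(J) @ J, so the secondary
    torque u (a spring-damper pulling the posture toward q_rest) cannot
    disturb the end-effector task wrench F_task.
    """
    n = J.shape[1]
    N = np.eye(n) - np.linalg.pinv(J) @ J   # null-space projector: J @ N == 0
    tau_task = J.T @ F_task                 # main-task torque
    u = -K_n * (q - q_rest) - D_n * dq      # compliant posture behavior
    return tau_task + N @ u

# Toy 7-DOF example (6-D task space); random Jacobian for illustration only.
rng = np.random.default_rng(0)
J = rng.standard_normal((6, 7))
q = rng.standard_normal(7)
tau = null_space_compliance_torque(J, q, dq=np.zeros(7),
                                   q_rest=np.zeros(7), F_task=np.zeros(6))
# With F_task = 0, tau lies entirely in the Jacobian null space, i.e. it
# maps to a (numerically) zero task-space wrench:
print(np.allclose(np.linalg.pinv(J.T) @ tau, 0.0, atol=1e-8))  # prints True
```

The projector identity J(I − J⁺J) = 0 is what lets the whole body stay compliant "for free" with respect to the main task, which is the property the third layer exploits.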

Abstract

Current Human-Robot Interaction (HRI) systems for skill teaching are fragmented, and existing approaches in the literature do not offer a cohesive framework that is simultaneously efficient, intuitive, and universally safe. This paper presents a novel, layered control framework that addresses this fundamental gap by enabling robust, compliant Learning from Demonstration (LfD) built upon a foundation of universal robot compliance. The proposed approach is structured in three progressive and interconnected stages. First, we introduce a real-time LfD method that learns both the trajectory and variable impedance from a single demonstration, significantly improving efficiency and reproduction fidelity. To ensure high-quality and intuitive kinesthetic teaching, we then present a null-space optimization strategy that proactively manages singularities and provides a consistent interaction feel during human demonstration. Finally, to ensure generalized safety, we introduce a foundational null-space compliance method that enables the entire robot body to compliantly adapt to post-learning external interactions without compromising main task performance. This final contribution transforms the system into a versatile HRI platform, moving beyond end-effector (EE)-specific applications. We validate the complete framework through comprehensive comparative experiments on a 7-DOF KUKA LWR robot. The results demonstrate a safer, more intuitive, and more efficient unified system for a wide range of human-robot collaborative tasks.
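As a rough intuition for the first layer, variable-impedance reproduction can be sketched as a tracking controller whose stiffness varies along the demonstrated trajectory. The stiffness schedule, critical-damping rule, and 1-D unit-mass dynamics below are illustrative assumptions, not the paper's single-demonstration learning method.

```python
import numpy as np

def impedance_force(x, xdot, x_d, xdot_d, K_t):
    """Variable-impedance tracking force with critical damping (unit mass)."""
    D_t = 2.0 * np.sqrt(K_t)                    # D = 2*sqrt(K*m), m = 1
    return K_t * (x_d - x) + D_t * (xdot_d - xdot)

# Demonstrated 1-D reach from 0 to 1, with an example stiffness schedule
# (stiffer mid-motion, softer near the contact-prone start and end).
t = np.linspace(0.0, 1.0, 200)
x_d = 0.5 * (1.0 - np.cos(np.pi * t))
K_t = 50.0 + 150.0 * np.sin(np.pi * t)          # 50..200 N/m, illustrative

# Simulate a unit point mass tracking the demonstration (explicit Euler).
dt = t[1] - t[0]
x, xdot = 0.0, 0.0
for i in range(1, len(t)):
    xdot_d = (x_d[i] - x_d[i - 1]) / dt         # demonstrated velocity
    F = impedance_force(x, xdot, x_d[i], xdot_d, K_t[i])
    xdot += F * dt                              # unit mass: acceleration = F
    x += xdot * dt

# Brief settling phase at the final target.
for _ in range(200):
    F = impedance_force(x, xdot, x_d[-1], 0.0, K_t[-1])
    xdot += F * dt
    x += xdot * dt

print(f"final tracking error: {abs(x - x_d[-1]):.4f}")
```

A low-stiffness segment lets the robot yield safely on unexpected contact, while high stiffness mid-motion keeps reproduction fidelity; learning this schedule together with the trajectory is what the first layer contributes.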