Force-Aware Residual DAgger via Trajectory Editing for Precision Insertion with Impedance Control

arXiv cs.RO / 4/10/2026


Key Points

  • The paper presents Trajectory Editing Residual Dataset Aggregation (TER-DAgger), a human-in-the-loop imitation learning framework that reduces covariate shift for contact-rich precision insertion by learning residual policies using optimization-based trajectory editing.
  • TER-DAgger combines robot rollouts with human corrective trajectories through a smooth fusion mechanism, aiming to provide consistent and stable supervision during execution.
  • It introduces a force-aware failure anticipation trigger that requests human intervention only when predicted and measured end-effector forces disagree, cutting down the need for continuous expert monitoring.
  • All learned policies are run under a Cartesian impedance control framework to maintain compliant, safe behavior during contact interactions.
  • Experiments in simulation and real-world insertion tasks report more than a 37% improvement in average success rate over several behavior cloning and correction/retraining baselines.
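The smooth-fusion idea in the second bullet can be sketched as a blend between the robot's own rollout and a human corrective trajectory over the same time window. The function name, the cosine ramp, and the `blend_len` parameter below are illustrative assumptions; the paper performs this fusion via optimization-based trajectory editing rather than a fixed blending schedule.

```python
import numpy as np

def fuse_trajectories(rollout, correction, blend_len=10):
    """Smoothly fuse a policy rollout with a human corrective trajectory.

    `rollout` and `correction` are (T, d) arrays of end-effector poses
    over the same time window. A cosine ramp transitions from the robot's
    trajectory to the human correction over `blend_len` steps, avoiding
    the discontinuity of a hard switch to the correction.
    """
    T = min(len(rollout), len(correction))
    fused = correction[:T].copy()
    n = min(blend_len, T)
    # Blend weight w goes 0 -> 1: start on the rollout, end on the correction.
    w = 0.5 * (1.0 - np.cos(np.linspace(0.0, np.pi, n)))
    fused[:n] = (1.0 - w)[:, None] * rollout[:n] + w[:, None] * correction[:n]
    return fused
```

A hard switch between the two trajectories would inject a velocity discontinuity into the supervision signal; any smooth monotone ramp avoids that, which is presumably why the paper emphasizes "consistent and stable" supervision.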

Abstract

Imitation learning (IL) has shown strong potential for contact-rich precision insertion tasks. However, its practical deployment is often hindered by covariate shift and the need for continuous expert monitoring to recover from failures during execution. In this paper, we propose Trajectory Editing Residual Dataset Aggregation (TER-DAgger), a scalable and force-aware human-in-the-loop imitation learning framework that mitigates covariate shift by learning residual policies through optimization-based trajectory editing. First, this approach smoothly fuses policy rollouts with human corrective trajectories, providing consistent and stable supervision. Second, we introduce a force-aware failure anticipation mechanism that triggers human intervention only when discrepancies arise between predicted and measured end-effector forces, significantly reducing the requirement for continuous expert monitoring. Third, all learned policies are executed within a Cartesian impedance control framework, ensuring compliant and safe behavior during contact-rich interactions. Extensive experiments in both simulation and real-world precision insertion tasks show that TER-DAgger improves the average success rate by over 37% compared to behavior cloning, human-guided correction, retraining, and fine-tuning baselines, demonstrating its effectiveness in mitigating covariate shift and enabling scalable deployment in contact-rich manipulation.
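The force-aware failure anticipation mechanism reduces to a discrepancy check between the force the policy expects at the end effector and the wrench actually measured. The sketch below uses a Euclidean-norm discrepancy and a hypothetical 5 N threshold; the paper's exact metric and threshold are not specified here, so treat both as assumptions.

```python
import numpy as np

def should_request_intervention(predicted_force, measured_force, threshold=5.0):
    """Force-aware failure anticipation trigger.

    Flags a likely failure (and thus a request for human intervention)
    when the policy's predicted end-effector force disagrees with the
    force/torque sensor reading by more than `threshold` newtons.
    Threshold value is an illustrative assumption.
    """
    discrepancy = np.linalg.norm(
        np.asarray(predicted_force, dtype=float)
        - np.asarray(measured_force, dtype=float)
    )
    return discrepancy > threshold

# Usage: a jammed peg produces contact forces the policy did not predict.
should_request_intervention([0.0, 0.0, 1.0], [0.0, 0.0, 10.0])  # large mismatch
```

Gating intervention on this discrepancy, rather than on continuous human observation, is what lets the expert supervise only the moments where contact behavior deviates from the policy's own expectation.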