From Action Labels to Sets: Rethinking Action Supervision for Imitation Learning from Corrective Feedback

arXiv cs.RO / 5/1/2026

💬 Opinion · Developer Stack & Infrastructure · Signals & Early Trends · Models & Research

Key Points

  • Behavior cloning (BC) is brittle when demonstrations contain imperfect or noisy actions because standard pointwise action-label supervision can push learned policies away from the true desired behavior.
  • The paper proposes CLIC (Contrastive policy Learning from Interactive Corrections), which replaces single-action targets with set-valued action targets derived from human corrective feedback.
  • CLIC trains policies to assign probability mass over sets of desirable actions, enabling the method to handle both absolute and relative corrections and to capture multi-modal behavior.
  • Experiments in both simulation and on real robots indicate that CLIC matches state-of-the-art performance with accurate data while offering substantially improved robustness to noisy, partial, and relative feedback.
  • The authors make their implementation publicly available, making it easy to reproduce the results and build on the method.
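
The set-valued objective described above can be viewed as an InfoNCE-style contrastive loss in which every action in the desired set counts as a positive. The snippet below is a minimal toy sketch of that idea, not the paper's implementation; the candidate actions, scores, and learning rate are all assumed for illustration.

```python
import numpy as np

def set_contrastive_loss(scores, desired_mask):
    """Negative log of the total probability mass a softmax policy
    places on the desired action set (an InfoNCE-style loss in which
    every action in the set is a positive). Hypothetical sketch, not
    the paper's implementation."""
    m = scores.max()
    log_z = m + np.log(np.exp(scores - m).sum())   # log-partition over all candidates
    pos = scores[desired_mask]
    log_pos = pos.max() + np.log(np.exp(pos - pos.max()).sum())
    return log_z - log_pos

# Toy setup: 6 candidate actions; corrections mark actions 1 and 4 as desirable.
rng = np.random.default_rng(0)
scores = rng.normal(size=6)            # unnormalized scores f(s, a) for one state
desired = np.zeros(6, dtype=bool)
desired[[1, 4]] = True

# Gradient descent directly on the scores to show the objective's effect:
# probability mass migrates onto the whole desired set, not a single label.
for _ in range(500):
    e = np.exp(scores - scores.max())
    p = e / e.sum()                    # softmax over all candidates
    q = np.where(desired, e, 0.0)
    q /= q.sum()                       # softmax restricted to the desired set
    scores -= 0.5 * (p - q)            # exact gradient of the loss above
```

Because both desirable actions receive gradient, the learned distribution stays multi-modal rather than collapsing onto a single label, which is the behavior set-valued supervision is meant to preserve.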

Abstract

Behavior cloning (BC) optimizes policies by treating human demonstrations as pointwise action labels. While effective with accurate action labels, this formulation is brittle in practice: when human-provided actions are imperfect, treating each label as an exact target can steer the policy away from the underlying desired behavior, particularly when expressive models are used (e.g., energy-based models). To address this, we propose a human-in-the-loop alternative that replaces pointwise supervision with set-valued action targets. We introduce Contrastive policy Learning from Interactive Corrections (CLIC). CLIC leverages human corrections to construct and refine sets of desired actions, and optimizes a policy to place probability mass over these sets rather than over a single action target. This formulation naturally accommodates both absolute and relative corrections and can represent complex multi-modal behaviors. Extensive simulation and real-robot experiments show that the proposed approach leads to effective policy learning across diverse settings: CLIC remains competitive with the state of the art under accurate data while being substantially more robust under noisy, relative, and partial feedback. Our implementation is publicly available at https://clic-webpage.github.io/.