Failure Identification in Imitation Learning Via Statistical and Semantic Filtering

arXiv cs.RO / 4/16/2026


Key Points

  • The paper argues that imitation learning policies for robotics are brittle in deployment because rare, out-of-distribution events (e.g., hardware faults or unexpected human actions) can cause failures despite good controlled-environment performance.
  • It proposes FIDeL, a policy-independent failure identification module that turns vision-based anomaly detection into actionable failure detection by combining compact nominal-demonstration representations, optimal-transport matching, anomaly scoring, and spatio-temporal thresholds.
  • FIDeL uses an extension of conformal prediction to set robust thresholds and a vision-language model to semantically filter benign deviations from true failures.
  • The work introduces BotFails, a multimodal real-world robotics dataset for evaluating failure detection, and reports consistent improvements over prior baselines.
  • Experiments show that FIDeL improves anomaly-detection AUROC by +5.30% and failure-detection accuracy by +17.38% on BotFails compared with existing methods.
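The conformal-prediction step above can be sketched as follows. This is a minimal split-conformal illustration, not the paper's exact spatio-temporal extension; the calibration distribution and `alpha` value are hypothetical. Given anomaly scores computed on held-out nominal demonstrations, the threshold is the finite-sample-corrected (1 − α) empirical quantile, which bounds the false-alarm rate on nominal data under exchangeability:

```python
import numpy as np

def conformal_threshold(calibration_scores, alpha=0.05):
    """Split-conformal threshold: the ceil((n+1)(1-alpha))-th smallest
    nominal calibration score. Under exchangeability, a new nominal score
    exceeds this threshold with probability at most alpha."""
    scores = np.sort(np.asarray(calibration_scores, dtype=float))
    n = len(scores)
    # Finite-sample corrected rank; clip so the index stays in range.
    k = min(int(np.ceil((n + 1) * (1 - alpha))), n)
    return scores[k - 1]

rng = np.random.default_rng(0)
cal = rng.exponential(scale=1.0, size=500)  # hypothetical nominal anomaly scores
tau = conformal_threshold(cal, alpha=0.05)

# A frame is flagged as anomalous when its score exceeds tau.
test_scores = np.array([0.2, 1.1, 8.0])
flags = test_scores > tau
```

FIDeL extends this idea with spatio-temporal thresholds over score heatmaps; the sketch only shows the per-score calibration principle.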

Abstract

Imitation learning (IL) policies in robotics deliver strong performance in controlled settings but remain brittle in real-world deployments: rare events such as hardware faults, defective parts, unexpected human actions, or any state that lies outside the training distribution can lead to failed executions. Vision-based Anomaly Detection (AD) methods have emerged as a natural solution for detecting these anomalous failure states, but they do not distinguish failures from benign deviations. We introduce FIDeL (Failure Identification in Demonstration Learning), a policy-independent failure detection module. Leveraging recent AD methods, FIDeL builds a compact representation of nominal demonstrations and aligns incoming observations via optimal-transport matching to produce anomaly scores and heatmaps. Spatio-temporal thresholds are derived with an extension of conformal prediction, and a Vision-Language Model (VLM) performs semantic filtering to discriminate benign anomalies from genuine failures. We also introduce BotFails, a multimodal dataset of real-world tasks for failure detection in robotics. FIDeL consistently outperforms state-of-the-art baselines, yielding +5.30% AUROC in anomaly detection and +17.38% failure-detection accuracy on BotFails compared to existing methods.
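The optimal-transport matching described in the abstract can be illustrated with a short entropic-OT (Sinkhorn) sketch in plain NumPy. This is not FIDeL's implementation; the feature dimensions, cost function, and regularization are assumptions for illustration. Patch features from the current observation are matched against a bank of nominal-demonstration features, and the resulting transport cost serves as an anomaly score:

```python
import numpy as np

def sinkhorn_cost(X, Y, reg=0.1, n_iter=200):
    """Entropic OT cost between uniform distributions over the rows of
    X (observation features) and Y (nominal feature bank)."""
    # Pairwise squared-Euclidean cost matrix.
    C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    eps = reg * C.mean()              # scale regularization to the cost
    K = np.exp(-C / eps)
    a = np.ones(len(X)) / len(X)      # uniform marginal over observation
    b = np.ones(len(Y)) / len(Y)      # uniform marginal over bank
    u = np.ones_like(a)
    for _ in range(n_iter):           # Sinkhorn fixed-point iterations
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]   # transport plan
    return float((P * C).sum())       # matching cost = anomaly score

rng = np.random.default_rng(1)
bank = rng.normal(size=(64, 8))           # hypothetical nominal patch features
nominal_obs = rng.normal(size=(16, 8))    # in-distribution observation
shifted_obs = nominal_obs + 3.0           # out-of-distribution observation

nominal_cost = sinkhorn_cost(nominal_obs, bank)
shifted_cost = sinkhorn_cost(shifted_obs, bank)
```

In-distribution observations transport cheaply onto the nominal bank, while out-of-distribution ones incur a much larger cost; FIDeL additionally derives per-location heatmaps from the matching, which this scalar sketch omits.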