ESPADA: Execution Speedup via Semantics Aware Demonstration Data Downsampling for Imitation Learning

arXiv cs.RO / 4/28/2026


Key Points

  • The paper introduces ESPADA, a semantically and spatially aware approach to downsampling demonstration data for behavior-cloning visuomotor policies, aiming to remove overly cautious human timing without losing accuracy.
  • ESPADA segments demonstrations using a VLM-LLM pipeline that leverages 3D gripper–object relations, allowing aggressive downsampling in non-critical segments while preserving precision-critical phases.
  • It does not require additional data, architectural changes, or any retraining, and it scales from a single annotated episode to the full dataset by propagating segment labels with Dynamic Time Warping (DTW) using dynamics-only features.
  • Experiments in both simulation and real-world settings (with ACT and DP baselines) show about a 2× speed-up while maintaining success rates, reducing the performance gap between human demonstrations and efficient robot control.
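The single-episode-to-dataset propagation in the third point is the most algorithmic part of the pipeline: align each unannotated episode to the annotated reference with DTW over dynamics-only features, then copy the reference's per-frame segment labels across the warping path. The paper's exact features and implementation are not given here, so the sketch below is a minimal illustration with a textbook DTW and hypothetical `ref_feats`/`ep_feats` arrays of per-frame feature vectors:

```python
import numpy as np

def dtw_path(a, b):
    """Classic dynamic time warping between feature sequences a (n x d)
    and b (m x d); returns the alignment path as (i, j) index pairs."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j - 1],
                                 cost[i - 1, j],
                                 cost[i, j - 1])
    # Backtrack from the corner to recover the optimal alignment.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def propagate_labels(ref_feats, ref_labels, ep_feats):
    """Transfer per-frame segment labels from the one annotated
    reference episode to a new episode along the DTW alignment."""
    labels = [None] * len(ep_feats)
    for i, j in dtw_path(ref_feats, ep_feats):
        labels[j] = ref_labels[i]
    return labels
```

Because the features are dynamics-only (e.g. gripper state and velocities rather than images), this propagation step stays cheap even across a full demonstration dataset.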

Abstract

Behavior-cloning based visuomotor policies enable precise manipulation but often inherit the slow, cautious tempo of human demonstrations, limiting practical deployment. Prior acceleration methods, however, rely mainly on statistical or heuristic cues that ignore task semantics and can fail across diverse manipulation settings. We present ESPADA, a semantically and spatially aware framework that segments demonstrations using a VLM-LLM pipeline with 3D gripper-object relations, enabling aggressive downsampling only in non-critical segments while preserving precision-critical phases, without requiring extra data, architectural modifications, or any form of retraining. To scale from a single annotated episode to the full dataset, ESPADA propagates segment labels via Dynamic Time Warping (DTW) on dynamics-only features. Across both simulation and real-world experiments with ACT and DP baselines, ESPADA achieves approximately a 2x speed-up while maintaining success rates, narrowing the gap between human demonstrations and efficient robot control.
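The downsampling rule itself reduces to: keep every frame inside precision-critical segments and thin out the rest. The paper's segment taxonomy and stride schedule are not reproduced here, so the following is a minimal sketch with a hypothetical `"critical"` label and a fixed stride:

```python
def segment_aware_downsample(frames, labels, stride=2):
    """Keep all frames in precision-critical segments; keep every
    `stride`-th frame in non-critical ones (labels are hypothetical)."""
    kept = []
    for k, (frame, label) in enumerate(zip(frames, labels)):
        if label == "critical" or k % stride == 0:
            kept.append(frame)
    return kept
```

Training the same ACT or DP policy on the thinned trajectories is what yields the faster execution at test time; no architecture or loss is changed.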