Refinement of Accelerated Demonstrations via Incremental Iterative Reference Learning Control for Fast Contact-Rich Imitation Learning
arXiv cs.RO / 4/21/2026
Key Points
- The paper tackles how to generate fast demonstrations for contact-rich manipulation imitation learning, noting that naive time acceleration distorts contact dynamics and increases tracking errors.
- It introduces Incremental Iterative Reference Learning Control (I2RLC), which combines iterative reference learning control (IRLC) — repeatedly adapting the reference trajectory to compensate for tracking error — with a gradual increase in execution speed, improving stability and fidelity.
- Experiments on real robots (whiteboard erasing and peg-in-hole) show that both IRLC and I2RLC can produce demonstrations up to 10x faster with lower tracking error, and that I2RLC achieves about 22.5% better spatial similarity to the original trajectories than IRLC alone.
- Using the refined trajectories to train imitation learning policies yields faster execution and 100% success on peg-in-hole for both seen and unseen positions, with I2RLC-trained policies generating lower contact forces than IRLC-trained ones.
- Overall, the results suggest that combining incremental speed scheduling with reference adaptation is an effective approach for practical fast contact-rich imitation learning.
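The two-loop structure described above — an outer schedule that gradually raises execution speed, and an inner ILC-style loop that re-adapts the reference at each speed — can be sketched in a few lines. This is a minimal 1-D illustration, not the paper's implementation: the first-order lag plant, the learning gain, the linear speed schedule, and all function names are assumptions made for the sketch.

```python
import numpy as np

def simulate(ref, alpha=0.5):
    """Toy plant: a first-order lag that trails the commanded reference,
    standing in for the robot's imperfect tracking at speed."""
    y = np.empty_like(ref)
    state = ref[0]
    for t, r in enumerate(ref):
        state += alpha * (r - state)
        y[t] = state
    return y

def resample(traj, factor):
    """Time-scale a trajectory by `factor` (>1 = faster, fewer samples)."""
    n = max(2, int(round(len(traj) / factor)))
    old_t = np.linspace(0.0, 1.0, len(traj))
    new_t = np.linspace(0.0, 1.0, n)
    return np.interp(new_t, old_t, traj)

def i2rlc(demo, target_speedup=10.0, stages=5, ilc_iters=20, gain=0.8):
    """Incrementally speed up the demo; at each stage, refine the
    reference with an ILC-style update until tracking matches the
    sped-up target trajectory."""
    speedups = np.linspace(1.0, target_speedup, stages + 1)[1:]
    ref = demo.copy()
    for s in speedups:
        desired = resample(demo, s)                    # sped-up target for this stage
        ref = resample(ref, len(ref) / len(desired))   # carry refined reference forward
        for _ in range(ilc_iters):
            err = desired - simulate(ref)              # tracking error at current speed
            ref = ref + gain * err                     # iterative reference update
    return ref, desired

# Stand-in 1-D "demonstration": one period of a sine wave.
demo = np.sin(np.linspace(0.0, 2.0 * np.pi, 200))
ref, desired = i2rlc(demo)
```

The key design choice this mirrors is that naive time acceleration (tracking `desired` directly) leaves a large lag-induced error, while the refined `ref` pre-compensates the plant so the executed motion stays close to the original demonstration's shape at the higher speed.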