FODMP: Fast One-Step Diffusion of Movement Primitives Generation for Time-Dependent Robot Actions
arXiv cs.AI / 3/27/2026
Key Points
- The paper addresses a limitation of diffusion-based robot learning: existing action-chunking diffusion policies are fast but generate only short, reactive action segments and cannot represent temporally structured, time-dependent movement primitives.
- It builds on Movement Primitive Diffusion (MPD), which uses ProDMPs to represent temporally structured trajectories, but MPD remains too slow because the motion decoder is embedded in a multi-step diffusion process.
- The authors propose FODMP, which distills diffusion models into the ProDMP trajectory parameter space and generates motions with a single-step decoder to remove the inference bottleneck.
- Experiments on MetaWorld and ManiSkill show FODMP can run up to 10× faster than MPD and 7× faster than action-chunking diffusion policies while maintaining or improving success rates.
- The framework also enables dynamic acceleration–deceleration primitives that improve real-time tasks such as intercepting and catching a fast-flying ball under closed-loop vision control.
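The core idea in the key points above can be sketched in a few lines: instead of running a multi-step diffusion loop at inference time, a distilled network maps an observation to movement-primitive weights in a single forward pass, and a fixed basis expands those weights into a full time-dependent trajectory. The sketch below is illustrative only; it uses a simplified ProMP-style radial-basis decoding (omitting ProDMP's initial-condition coupling terms), and all names, shapes, and the toy linear "network" are assumptions, not the paper's implementation.

```python
import numpy as np

def rbf_basis(ts, n_basis=8, width=0.1):
    """Normalized radial basis functions over a phase variable ts in [0, 1]."""
    centers = np.linspace(0.0, 1.0, n_basis)
    phi = np.exp(-((ts[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))
    return phi / phi.sum(axis=1, keepdims=True)  # shape (T, n_basis)

def one_step_policy(obs, W, b):
    """Single forward pass: observation -> primitive weights, no diffusion loop.

    Stands in for a distilled network that predicts trajectory parameters
    directly, which is where the one-step speedup would come from.
    """
    return np.tanh(obs @ W) + b  # shape (n_basis,)

rng = np.random.default_rng(0)
obs = rng.normal(size=4)              # toy observation vector
W = rng.normal(size=(4, 8)) * 0.5     # stand-in for distilled network weights
b = rng.normal(size=8) * 0.1

ts = np.linspace(0.0, 1.0, 50)        # phase over the motion duration
weights = one_step_policy(obs, W, b)  # one-step generation of MP parameters
traj = rbf_basis(ts) @ weights        # smooth, time-dependent trajectory, shape (50,)
```

Because the basis functions are smooth in time, the decoded trajectory is a continuous motion rather than a short action chunk, which is what makes dynamic acceleration-deceleration profiles (as in the ball-catching task) representable at all.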