
ActionPlan: Future-Aware Streaming Motion Synthesis via Frame-Level Action Planning

arXiv cs.CV / 3/17/2026

📰 News · Models & Research

Key Points

  • ActionPlan introduces a per-frame action plan with frame-level text latents that act as dense semantic anchors during denoising, enabling structured motion generation.
  • The framework enables real-time streaming by using history-conditioned, future-aware diffusion with latent-specific steps, while also supporting high-quality offline motion generation within a single model.
  • It supports zero-shot motion editing and in-betweening without additional models, increasing flexibility for post-hoc adjustments and interpolation.
  • Empirical results show real-time streaming runs 5.25x faster and achieves an 18% improvement in motion quality (FID) over the best previous method.

Abstract

We present ActionPlan, a unified motion diffusion framework that bridges real-time streaming with high-quality offline generation within a single model. The core idea is to introduce a per-frame action plan: the model predicts frame-level text latents that act as dense semantic anchors throughout denoising, and uses them to denoise the full motion sequence with combined semantic and motion cues. To support this structured workflow, we design latent-specific diffusion steps, allowing each motion latent to be denoised independently and sampled in flexible orders at inference. As a result, ActionPlan can run in a history-conditioned, future-aware mode for real-time streaming, while also supporting high-quality offline generation. The same mechanism further enables zero-shot motion editing and in-betweening without additional models. Experiments demonstrate that our real-time streaming mode is 5.25x faster than the best previous method while also achieving an 18% improvement in motion quality in terms of FID.
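To make the "latent-specific diffusion steps" idea concrete, here is a minimal toy sketch. This is not the ActionPlan implementation (the paper's code and network are not described here); every function name and number below is a made-up assumption. It only illustrates the mechanism the abstract describes: each per-frame motion latent carries its own diffusion timestep and is denoised toward a frame-level text anchor, so frames can be visited in flexible orders (causally for streaming, or in any sweep for offline generation).

```python
import numpy as np

# Illustrative sketch only -- NOT the ActionPlan model. A real system
# would replace `denoise_step` with a learned diffusion network.

rng = np.random.default_rng(0)
T, D, STEPS = 8, 4, 10  # frames, latent dim, diffusion steps (all invented)

def denoise_step(latent, text_anchor, t):
    """Stand-in for one denoising step conditioned on a frame-level
    text latent (the 'dense semantic anchor')."""
    return latent + 0.1 * (text_anchor - latent)  # toy pull toward the anchor

def generate(order, anchors):
    """Denoise frame latents in an arbitrary frame order.

    Each latent keeps its own timestep counter, which is what lets a
    streaming mode finish early frames first (history-conditioned),
    while an offline mode could interleave frames freely."""
    latents = rng.normal(size=(T, D))   # start every frame from noise
    timestep = np.full(T, STEPS)        # per-latent step counters
    for i in order:
        while timestep[i] > 0:          # fully denoise this frame's latent
            latents[i] = denoise_step(latents[i], anchors[i], timestep[i])
            timestep[i] -= 1
    return latents

anchors = rng.normal(size=(T, D))  # stand-in frame-level text latents
streaming = generate(order=list(range(T)), anchors=anchors)       # causal order
offline = generate(order=list(rng.permutation(T)), anchors=anchors)  # any order
```

Because each latent tracks its own timestep, the same loop also suggests how zero-shot in-betweening could work in such a scheme: fixed keyframes simply start with a timestep counter of zero, while the frames between them are denoised normally.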