DynaVid: Learning to Generate Highly Dynamic Videos using Synthetic Motion Data

arXiv cs.CV / 4/3/2026


Key Points

  • DynaVid is introduced as a video synthesis framework targeting diffusion-based models’ difficulty with highly dynamic motion and fine-grained motion controllability.
  • The method addresses the scarcity of suitable real training data by rendering synthetic motion supervision as optical flow using computer-graphics pipelines, which provides diverse motion patterns and precise control signals.
  • By training with motion represented as optical flow (decoupled from appearance), DynaVid aims to avoid the unnatural visual artifacts that can come from rendered synthetic videos.
  • The approach uses a two-stage pipeline—first synthesizing motion with a motion generator, then producing motion-guided video frames conditioned on that motion—to improve both controllability and realism.
  • Experiments on two scenarios where training data are especially scarce, vigorous human motion generation and extreme camera motion control, show improved realism and controllability compared with existing approaches.
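The two-stage design above can be sketched in code. This is a minimal illustrative mock-up, not the authors' implementation: the function names, shapes, and the nearest-neighbor warping in stage 2 are all assumptions chosen to make the control flow concrete. In DynaVid, both stages are learned generative models; here stage 1 is a stand-in random flow field and stage 2 is a simple backward-warp of the first frame.

```python
import numpy as np

def motion_generator(seed: int, n_frames: int, h: int, w: int) -> np.ndarray:
    """Stage 1 (stand-in): synthesize motion as a (T, H, W, 2) optical-flow
    sequence. In the paper this is a generative model trained on flow
    rendered from computer-graphics pipelines; here it is a random field."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal((n_frames, h, w, 2)).astype(np.float32)

def motion_guided_video_generator(first_frame: np.ndarray,
                                  flow: np.ndarray) -> np.ndarray:
    """Stage 2 (stand-in): produce frames conditioned on the flow.
    We warp the first frame by the cumulative flow with nearest-neighbor
    lookup; the real model is a video generator trained on real footage,
    so appearance never comes from synthetic data."""
    t, h, w, _ = flow.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cumulative = np.zeros((h, w, 2), dtype=np.float32)
    frames = [first_frame]
    for i in range(t):
        cumulative += flow[i]  # total displacement up to frame i
        src_y = np.clip(np.round(ys - cumulative[..., 1]).astype(int), 0, h - 1)
        src_x = np.clip(np.round(xs - cumulative[..., 0]).astype(int), 0, w - 1)
        frames.append(first_frame[src_y, src_x])
    return np.stack(frames)

first = np.zeros((8, 8, 3), dtype=np.float32)   # dummy appearance input
flow = motion_generator(seed=0, n_frames=4, h=8, w=8)
video = motion_guided_video_generator(first, flow)  # shape (5, 8, 8, 3)
```

The point of the decoupling is visible in the interfaces: only `flow` (pure motion) crosses from stage 1 to stage 2, so any unnatural appearance in rendered synthetic data cannot leak into the generated frames.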

Abstract

Despite recent progress, video diffusion models still struggle to synthesize realistic videos involving highly dynamic motions or requiring fine-grained motion controllability. A central limitation lies in the scarcity of such examples in commonly used training datasets. To address this, we introduce DynaVid, a video synthesis framework that leverages synthetic motion data in training, which is represented as optical flow and rendered using computer graphics pipelines. This approach offers two key advantages. First, synthetic motion offers diverse motion patterns and precise control signals that are difficult to obtain from real data. Second, unlike rendered videos with artificial appearances, rendered optical flow encodes only motion and is decoupled from appearance, thereby preventing models from reproducing the unnatural look of synthetic videos. Building on this idea, DynaVid adopts a two-stage generation framework: a motion generator first synthesizes motion, and then a motion-guided video generator produces video frames conditioned on that motion. This decoupled formulation enables the model to learn dynamic motion patterns from synthetic data while preserving visual realism from real-world videos. We validate our framework on two challenging scenarios, vigorous human motion generation and extreme camera motion control, where existing datasets are particularly limited. Extensive experiments demonstrate that DynaVid improves the realism and controllability in dynamic motion generation and camera motion control.