
Training-free Motion Factorization for Compositional Video Generation

arXiv cs.CV · March 11, 2026


Key Points

  • The paper proposes a training-free motion factorization framework for compositional video generation that decomposes complex motion into three categories: motionlessness, rigid motion, and non-rigid motion.
  • The approach follows a two-step, planning-before-generation paradigm: motion laws are first planned on a motion graph to structure instance interactions, and video frames are then generated by modulating each motion category in a disentangled manner (see the planning sketch after this list).
  • The framework is model-agnostic and can be integrated into various diffusion model architectures, enhancing their ability to synthesize realistic and diverse motions.
  • Experimental results on real-world benchmarks demonstrate the method’s impressive performance, particularly in understanding and generating diverse motion patterns specified by user prompts.
  • The authors plan to release their code publicly to facilitate adoption and further research in compositional video generation.
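
The planning step is easiest to see in code. Below is a minimal Python sketch under our own assumptions: the `Instance`, `MotionGraph`, and `plan_trajectories` names, the bounding-box representation, and the simple linear paths are illustrative stand-ins, not the paper's published interface.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

# Hypothetical data structures -- the paper does not publish its schema.

@dataclass
class Instance:
    name: str
    category: str  # "motionless" | "rigid" | "non_rigid"
    box: Tuple[float, float, float, float]  # initial (x, y, w, h), normalized

@dataclass
class MotionGraph:
    instances: List[Instance]
    # Edges encode interactions, e.g. ("ball", "rolls_across", "table").
    edges: List[Tuple[str, str, str]] = field(default_factory=list)

def plan_trajectories(graph: MotionGraph, num_frames: int
                      ) -> Dict[str, List[Tuple[float, float, float, float]]]:
    """Derive frame-wise (position, shape) changes for each instance.

    Motionless instances keep their box; rigid instances translate but
    preserve shape; non-rigid instances change position and shape. A real
    planner would also reason over the interaction edges.
    """
    plans = {}
    for inst in graph.instances:
        x, y, w, h = inst.box
        boxes = []
        for t in range(num_frames):
            a = t / max(num_frames - 1, 1)  # progress in [0, 1]
            if inst.category == "motionless":
                boxes.append((x, y, w, h))
            elif inst.category == "rigid":
                boxes.append((x + 0.3 * a, y, w, h))  # translate, keep shape
            else:  # non-rigid: deform the box as well
                boxes.append((x, y, w * (1 + 0.2 * a), h * (1 - 0.2 * a)))
        plans[inst.name] = boxes
    return plans

graph = MotionGraph(
    instances=[
        Instance("table", "motionless", (0.1, 0.6, 0.8, 0.3)),
        Instance("ball", "rigid", (0.2, 0.3, 0.1, 0.1)),
        Instance("flag", "non_rigid", (0.7, 0.1, 0.15, 0.2)),
    ],
    edges=[("ball", "rolls_across", "table")],
)
print(plan_trajectories(graph, num_frames=4)["ball"])
```

The output, per instance, is exactly the kind of frame-wise box sequence that the generation stage can then condition on.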


arXiv:2603.09104 (cs)
[Submitted on 10 Mar 2026]

Title: Training-free Motion Factorization for Compositional Video Generation

Authors: Zixuan Wang and 6 other authors
Abstract: Compositional video generation aims to synthesize multiple instances with diverse appearance and motion, and is widely applicable in real-world scenarios. However, current approaches focus mainly on binding semantics, neglecting the diverse motion categories specified in prompts. In this paper, we propose a motion factorization framework that decomposes complex motion into three primary categories: motionlessness, rigid motion, and non-rigid motion. Specifically, our framework follows a planning-before-generation paradigm. (1) During planning, we reason about motion laws on the motion graph to obtain frame-wise changes in the shape and position of each instance. This alleviates semantic ambiguities in the user prompt by organizing it into a structured representation of instances and their interactions. (2) During generation, we modulate the synthesis of distinct motion categories in a disentangled manner. Conditioned on the motion cues, guidance branches stabilize appearance in motionless regions, preserve rigid-body geometry, and regularize local non-rigid deformations. Crucially, both modules are model-agnostic and can be seamlessly incorporated into various diffusion model architectures. Extensive experiments demonstrate that our framework achieves impressive performance in motion synthesis on real-world benchmarks. Our code will be released soon.
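
To make the generation side concrete, here is a hedged PyTorch sketch of one denoising step with per-category guidance over 5-D video latents (batch, channels, frames, height, width). The `denoiser` signature, the guidance weights, and the three proxies (temporal averaging for motionless regions, within-region averaging for rigid regions, light smoothing for non-rigid regions) are all our own assumptions; the paper's actual guidance branches are not specified at this level of detail.

```python
import torch
import torch.nn.functional as F

def guided_step(denoiser, latents, t, prompt_emb, masks,
                w_still=1.0, w_rigid=0.5, w_nonrigid=0.3):
    """One denoising step with disentangled, per-category guidance.

    latents: (B, C, T, H, W) video latents.
    masks:   dict of boolean tensors ("motionless", "rigid", "non_rigid"),
             each shaped like `latents`, rasterized from planned boxes.
    """
    eps = denoiser(latents, t, prompt_emb)  # base noise prediction

    # Motionless: pull the prediction toward its temporal mean so the
    # region looks the same in every frame (appearance stabilization).
    temporal_mean = eps.mean(dim=2, keepdim=True).expand_as(eps)
    eps = torch.where(masks["motionless"],
                      eps + w_still * (temporal_mean - eps), eps)

    # Rigid: suppress within-region spatial variation per frame, a crude
    # proxy for preserving rigid-body geometry (no local deformation).
    if masks["rigid"].any():
        m = masks["rigid"].float()
        region_mean = ((eps * m).sum(dim=(3, 4), keepdim=True)
                       / m.sum(dim=(3, 4), keepdim=True).clamp(min=1.0))
        eps = torch.where(masks["rigid"],
                          eps + w_rigid * (region_mean - eps), eps)

    # Non-rigid: lightly smooth the prediction to regularize local
    # deformations without suppressing them entirely.
    blurred = F.avg_pool3d(eps, kernel_size=3, stride=1, padding=1)
    eps = torch.where(masks["non_rigid"],
                      eps + w_nonrigid * (blurred - eps), eps)
    return eps

# Tiny smoke test with a stand-in (identity) denoiser.
B, C, T, H, W = 1, 4, 8, 32, 32
lat = torch.randn(B, C, T, H, W)
masks = {k: torch.zeros(B, C, T, H, W, dtype=torch.bool)
         for k in ("motionless", "rigid", "non_rigid")}
masks["rigid"][..., 8:16, 8:16] = True
out = guided_step(lambda z, t, e: z, lat, 10, None, masks)
print(out.shape)  # torch.Size([1, 4, 8, 32, 32])
```

Because the step only wraps an existing `denoiser` call, modulation of this kind is model-agnostic in spirit: it can sit around any diffusion backbone that exposes its noise prediction.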
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2603.09104 [cs.CV]
  (or arXiv:2603.09104v1 [cs.CV] for this version)
  https://doi.org/10.48550/arXiv.2603.09104

Submission history

From: Zixuan Wang
[v1] Tue, 10 Mar 2026 02:27:48 UTC (9,154 KB)