AI Navigate

InfiniteDance: Scalable 3D Dance Generation Towards in-the-wild Generalization

arXiv cs.CV / 3/17/2026

📰 News · Signals & Early Trends · Models & Research

Key Points

  • The work tackles generalizable 3D dance generation from monocular videos by scaling both data and model design to handle unseen music.
  • It introduces a Foot Restoration Diffusion Model (FRDM) that enforces physical plausibility via foot-contact and geometric constraints; the resulting automated reconstruction pipeline yields a 100.69-hour multimodal 3D dance dataset.
  • It presents ChoreoLLaMA, a scalable LLaMA-based architecture with a retrieval-augmented generation (RAG) module that injects retrieved reference dances as prompts to stay robust under unfamiliar music.
  • A slow/fast-cadence Mixture-of-Experts module lets the model adapt motion rhythms across different tempos.
  • Experiments show improvements over existing methods, and the authors plan to release code, models, and data.
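The foot-contact constraint mentioned above can be illustrated with a minimal sketch. The paper does not give its exact formulation, so the function below shows only the common idea behind such constraints: when a foot joint is near the ground, its velocity should be close to zero, and any residual "sliding" is penalized. All names and the threshold value are illustrative assumptions, not the authors' implementation.

```python
def foot_contact_loss(foot_heights, foot_velocities, contact_height=0.02):
    """Penalize foot sliding for one foot joint over a motion clip.

    foot_heights:    per-frame height of the foot joint above the floor (m)
    foot_velocities: per-frame horizontal speed of the foot joint (m/s)
    contact_height:  hypothetical threshold below which the foot is
                     treated as being in contact with the ground

    This is an illustrative sketch, not FRDM's actual constraint.
    """
    loss = 0.0
    for h, v in zip(foot_heights, foot_velocities):
        if h < contact_height:   # foot considered in ground contact
            loss += v * v        # contact frames should have ~zero velocity
    return loss / max(len(foot_heights), 1)
```

A diffusion model guided by a term like this trades off physical plausibility against the kinematic smoothness and expressiveness terms the paper mentions.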

Abstract

Although existing 3D dance generation methods perform well in controlled scenarios, they often struggle to generalize in the wild. When conditioned on unseen music, existing methods often produce unstructured or physically implausible dance, largely due to limited music-to-dance data and restricted model capacity. This work aims to push the frontier of generalizable 3D dance generation by scaling up both data and model design. (1) On the data side, we develop a fully automated pipeline that reconstructs high-fidelity 3D dance motions from monocular videos. To eliminate the physical artifacts prevalent in existing reconstruction methods, we introduce a Foot Restoration Diffusion Model (FRDM) guided by foot-contact and geometric constraints that enforce physical plausibility while preserving kinematic smoothness and expressiveness, resulting in a diverse, high-quality multimodal 3D dance dataset totaling 100.69 hours. (2) On model design, we propose Choreographic LLaMA (ChoreoLLaMA), a scalable LLaMA-based architecture. To enhance robustness under unfamiliar music conditions, we integrate a retrieval-augmented generation (RAG) module that injects reference dance as a prompt. Additionally, we design a slow/fast-cadence Mixture-of-Experts (MoE) module that enables ChoreoLLaMA to smoothly adapt motion rhythms across varying music tempos. Extensive experiments across diverse dance genres show that our approach surpasses existing methods in both qualitative and quantitative evaluations, marking a step toward scalable, real-world 3D dance generation. Code, models, and data will be released.
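The slow/fast-cadence Mixture-of-Experts idea from the abstract can be sketched as a soft gate over two expert networks, weighted by the music's tempo. The paper does not describe its gating mechanism, so everything below (the sigmoid gate, the `midpoint` and `sharpness` parameters, the two-expert setup) is a hypothetical illustration of how tempo-conditioned expert blending could work, not ChoreoLLaMA's design.

```python
import math

def gate_weights(tempo_bpm, midpoint=110.0, sharpness=0.08):
    """Soft gate between a slow-cadence and a fast-cadence expert.

    `midpoint` is the tempo (BPM) at which both experts contribute
    equally; `sharpness` controls how quickly the gate transitions.
    Both values are illustrative assumptions.
    """
    w_fast = 1.0 / (1.0 + math.exp(-sharpness * (tempo_bpm - midpoint)))
    return 1.0 - w_fast, w_fast  # (slow weight, fast weight)

def moe_step(slow_expert, fast_expert, features, tempo_bpm):
    """Blend the two experts' per-frame motion predictions by tempo."""
    w_slow, w_fast = gate_weights(tempo_bpm)
    return [w_slow * s + w_fast * f
            for s, f in zip(slow_expert(features), fast_expert(features))]
```

A smooth gate like this is one way the model could "smoothly adapt motion rhythms across varying music tempos" rather than switching experts abruptly at a hard tempo cutoff.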