InfiniteDance: Scalable 3D Dance Generation Towards in-the-wild Generalization
arXiv cs.CV / 3/17/2026
📰 News · Signals & Early Trends · Models & Research
Key Points
- The work targets generalizable 3D dance generation, scaling both the data (reconstructed from monocular videos) and the model design so the system handles unseen, in-the-wild music.
- It introduces the Foot Restoration Diffusion Model (FRDM), which enforces physical plausibility via foot-contact and geometric constraints (a contact-loss sketch follows this list), and uses it to build a 100.69-hour multimodal 3D dance dataset.
- It presents ChoreoLLaMA, a scalable LLaMA-based architecture whose retrieval-augmented generation module references stored dance prompts when the conditioning music is unfamiliar (see the retrieval sketch below).
- A slow/fast-cadence Mixture-of-Experts module lets the model adapt motion rhythm across different tempos (a gating sketch follows this list).
- Experiments show improvements over existing methods, and the authors plan to release code, models, and data.
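The foot-contact constraint behind FRDM is a standard physics-plausibility device in motion generation: feet in contact with the floor should neither slide nor penetrate it. Below is a minimal sketch of such a loss, assuming PyTorch; the function name, tensor shapes, and Y-up floor convention are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical foot-contact loss for motion diffusion (not the paper's code).
import torch

def foot_contact_loss(joint_pos: torch.Tensor,
                      contact_prob: torch.Tensor,
                      floor_height: float = 0.0) -> torch.Tensor:
    """Penalize foot skating and floor penetration.

    joint_pos:    (batch, frames, n_feet, 3) foot joint positions, Y-up.
    contact_prob: (batch, frames, n_feet) predicted contact probability.
    """
    # Foot velocity between consecutive frames.
    vel = joint_pos[:, 1:] - joint_pos[:, :-1]                   # (B, T-1, F, 3)
    # When a foot is in contact, its velocity should be near zero (no skating).
    skating = (contact_prob[:, 1:, :, None] * vel).norm(dim=-1).mean()
    # Feet should not sink below the floor plane.
    penetration = torch.relu(floor_height - joint_pos[..., 1]).mean()
    return skating + penetration
```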
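The retrieval-augmented module presumably matches incoming music against a bank of reference dances and feeds the best matches to the generator as prompts. A minimal NumPy sketch of that lookup, with hypothetical names and a cosine-similarity metric assumed by this example, could look like this:

```python
# Illustrative nearest-neighbor retrieval of dance prompts by music embedding.
import numpy as np

def retrieve_dance_prompts(query_music_emb: np.ndarray,
                           bank_music_embs: np.ndarray,
                           bank_prompts: list[str],
                           k: int = 3) -> list[str]:
    """Return the k dance prompts whose music is most similar to the query."""
    q = query_music_emb / np.linalg.norm(query_music_emb)
    b = bank_music_embs / np.linalg.norm(bank_music_embs, axis=1, keepdims=True)
    sims = b @ q                       # cosine similarity to each bank entry
    top = np.argsort(-sims)[:k]        # indices of the k best matches
    return [bank_prompts[i] for i in top]
```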
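The slow/fast-cadence Mixture-of-Experts can be read as soft routing between two expert branches, gated by a music (tempo) feature. A minimal PyTorch sketch under that assumption; the class, expert choice, and gating signal are illustrative, not the paper's module:

```python
# Hypothetical two-expert MoE gated by a per-frame music feature.
import torch
import torch.nn as nn

class CadenceMoE(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.slow_expert = nn.Linear(dim, dim)  # long-horizon rhythm branch
        self.fast_expert = nn.Linear(dim, dim)  # beat-level detail branch
        self.gate = nn.Linear(dim, 2)           # soft routing from music feature

    def forward(self, motion_feat: torch.Tensor,
                music_feat: torch.Tensor) -> torch.Tensor:
        # w: (batch, frames, 2), mixing weights that sum to 1 per position.
        w = self.gate(music_feat).softmax(dim=-1)
        return (w[..., 0:1] * self.slow_expert(motion_feat)
                + w[..., 1:2] * self.fast_expert(motion_feat))
```

Soft gating lets the mixture shift smoothly as the tempo changes mid-song, rather than hard-switching between experts.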
Related Articles
[R] Combining Identity Anchors + Permission Hierarchies achieves 100% refusal in abliterated LLMs — system prompt only, no fine-tuning
Reddit r/MachineLearning
The Demethylation
Dev.to
[P] Vibecoded on a home PC: building a ~2700 Elo browser-playable neural chess engine with a Karpathy-inspired AI-assisted research loop
Reddit r/MachineLearning
Meet DuckLLM 1.0 My First Model!
Reddit r/LocalLLaMA
95% of UK students now use AI and their experiences couldn't be more divided
THE DECODER