ST-GDance++: A Scalable Spatial-Temporal Diffusion for Long-Duration Group Choreography

arXiv cs.AI / 3/25/2026


Key Points

  • The paper studies group choreography generation from music, focusing on maintaining spatial coordination across multiple dancers while preventing motion collisions over long sequences.
  • It argues that prior approaches suffer from quadratic attention costs as dancer count and sequence length grow, making interactive deployment difficult and coordination unstable.
  • ST-GDance++ is introduced as a scalable diffusion-based framework that decouples spatial and temporal dependencies to improve efficiency and robustness.
  • For spatial modeling, the method uses lightweight distance-aware graph convolutions to represent inter-dancer relationships with lower overhead.
  • For temporal modeling, it proposes a diffusion noise scheduling strategy together with an efficient temporal-aligned attention mask, enabling stream-based generation of long-duration motion.
  • Experiments on the AIOZ-GDance dataset show reduced latency compared with prior methods while maintaining competitive generation quality.
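
The paper does not spell out its spatial module in this summary, but the "lightweight distance-aware graph convolution" idea can be sketched roughly as follows. The Gaussian distance kernel, row normalization, tanh activation, and all shapes here are illustrative assumptions, not ST-GDance++'s exact design:

```python
import numpy as np

def distance_aware_adjacency(positions, sigma=1.0):
    """Build a dancer-to-dancer adjacency weighted by pairwise distance.

    positions: (N, 3) array of dancer root positions.
    Nearby dancers receive larger edge weights via a Gaussian kernel
    (an assumed kernel choice; the paper may use a different weighting).
    """
    diff = positions[:, None, :] - positions[None, :, :]   # (N, N, 3) pairwise offsets
    dist = np.linalg.norm(diff, axis=-1)                   # (N, N) pairwise distances
    adj = np.exp(-dist**2 / (2 * sigma**2))                # soft, distance-aware edges
    adj /= adj.sum(axis=1, keepdims=True)                  # row-normalize the graph
    return adj

def graph_conv(features, adj, weight):
    """One lightweight graph convolution: aggregate neighbors, then project.

    features: (N, D_in) per-dancer features; weight: (D_in, D_out).
    Cost is O(N^2 * D), cheaper than full spatial-temporal attention.
    """
    return np.tanh(adj @ features @ weight)

# Toy usage: 4 dancers with 8-dimensional motion features.
rng = np.random.default_rng(0)
pos = rng.normal(size=(4, 3))
feats = rng.normal(size=(4, 8))
W = rng.normal(size=(8, 8)) * 0.1
out = graph_conv(feats, distance_aware_adjacency(pos), W)
print(out.shape)  # (4, 8)
```

Because the adjacency depends only on dancer positions, distant dancers contribute little to each update, which is one plausible way such a module could discourage motion collisions at low cost.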

Abstract

Group dance generation from music requires synchronizing multiple dancers while maintaining spatial coordination, making it highly relevant to applications such as film production, gaming, and animation. Recent group dance generation models have achieved promising generation quality, but they remain difficult to deploy in interactive scenarios due to bidirectional attention dependencies. As the number of dancers and the sequence length increase, the attention computation required for aligning music conditions with motion sequences grows quadratically, leading to reduced efficiency and increased risk of motion collisions. Effectively modeling dense spatial-temporal interactions is therefore essential, yet existing methods often struggle to capture such complexity, resulting in limited scalability and unstable multi-dancer coordination. To address these challenges, we propose ST-GDance++, a scalable framework that decouples spatial and temporal dependencies to enable efficient and collision-aware group choreography generation. For spatial modeling, we introduce lightweight distance-aware graph convolutions to capture inter-dancer relationships while reducing computational overhead. For temporal modeling, we design a diffusion noise scheduling strategy together with an efficient temporal-aligned attention mask, enabling stream-based generation for long motion sequences and improving scalability in long-duration scenarios. Experiments on the AIOZ-GDance dataset show that ST-GDance++ achieves competitive generation quality with significantly reduced latency compared to existing methods.
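The abstract contrasts bidirectional attention, which needs the full sequence, with a stream-friendly "temporal-aligned attention mask". One common way to realize such a mask is a causal local window, so each frame attends only to itself and a few preceding frames; this is a hypothetical sketch of that idea, not the paper's exact mask:

```python
import numpy as np

def temporal_aligned_mask(n_frames, window=4):
    """Causal, windowed attention mask for streaming motion generation.

    Each frame t may attend only to frames [t - window + 1, t], so no
    future frames are required and per-frame attention cost is O(window)
    rather than O(n_frames). True means attention is allowed.
    (Window size and exact structure are illustrative assumptions.)
    """
    mask = np.zeros((n_frames, n_frames), dtype=bool)
    for t in range(n_frames):
        lo = max(0, t - window + 1)
        mask[t, lo:t + 1] = True
    return mask

mask = temporal_aligned_mask(6, window=3)
print(mask.astype(int))
```

With no True entries above the diagonal, frames can be generated as the music streams in, which is the property the abstract credits for the reduced latency in interactive scenarios.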
