Coordinate-Based Dual-Constrained Autoregressive Motion Generation

arXiv cs.CV / 4/10/2026


Key Points

  • The paper proposes CDAMD (Coordinate-based Dual-constrained Autoregressive Motion Generation), a new text-to-motion framework designed to improve fidelity and semantic faithfulness over prior diffusion and autoregressive approaches.
  • It uses motion coordinates as inputs and combines an autoregressive generation paradigm with diffusion-inspired multi-layer perceptrons to reduce common autoregressive failure modes like mode collapse.
  • A “Dual-Constrained Causal Mask” is introduced to steer token-based autoregressive generation by treating motion tokens as priors concatenated with textual encodings.
  • The authors also introduce new benchmarks for both text-to-motion generation and motion editing, reporting state-of-the-art results on fidelity and semantic consistency.
  • By targeting coordinate-based motion synthesis and addressing error amplification and discretization issues, the work aims to make generated motions more usable for animation, VR, robotics, and HCI applications.
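The "Dual-Constrained Causal Mask" described above concatenates motion tokens (acting as priors) with textual encodings and constrains how positions may attend to one another. The paper's exact mask is not reproduced here; the sketch below is one plausible reading, assuming a prefix-style layout in which text tokens attend bidirectionally among themselves while motion tokens attend to the full text prefix and causally to earlier motion tokens. The function name and layout are illustrative, not from the paper.

```python
import numpy as np

def dual_constrained_causal_mask(n_text: int, n_motion: int) -> np.ndarray:
    """Boolean attention mask (True = may attend) for a sequence laid out
    as [text tokens | motion tokens]. This is a hypothetical rendering of
    a dual-constrained mask, not the paper's exact formulation."""
    n = n_text + n_motion
    mask = np.zeros((n, n), dtype=bool)
    # Constraint 1: the textual encoding acts as a fully visible prefix,
    # so text tokens attend bidirectionally within the text block.
    mask[:n_text, :n_text] = True
    # Motion tokens may always attend to the entire text prefix.
    mask[n_text:, :n_text] = True
    # Constraint 2: motion tokens attend causally, i.e. only to themselves
    # and to earlier motion tokens (lower-triangular block).
    mask[n_text:, n_text:] = np.tril(np.ones((n_motion, n_motion), dtype=bool))
    return mask

# Example: 2 text tokens conditioning 3 motion tokens.
m = dual_constrained_causal_mask(n_text=2, n_motion=3)
```

Under this reading, the causal block prevents a motion token from seeing future motion, which is what lets the model generate autoregressively while the text prefix supplies a fixed semantic constraint at every step.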

Abstract

Text-to-motion generation has attracted increasing attention in the research community recently, with potential applications in animation, virtual reality, robotics, and human-computer interaction. Diffusion and autoregressive models are two popular and parallel research directions for text-to-motion generation. However, diffusion models often suffer from error amplification during noise prediction, while autoregressive models exhibit mode collapse due to motion discretization. To address these limitations, we propose a flexible, high-fidelity, and semantically faithful text-to-motion framework, named Coordinate-based Dual-constrained Autoregressive Motion Generation (CDAMD). With motion coordinates as input, CDAMD follows the autoregressive paradigm and leverages diffusion-inspired multi-layer perceptrons to enhance the fidelity of predicted motions. Furthermore, a Dual-Constrained Causal Mask is introduced to guide autoregressive generation, where motion tokens act as priors and are concatenated with textual encodings. Since there is limited work on coordinate-based motion synthesis, we establish new benchmarks for both text-to-motion generation and motion editing. Experimental results demonstrate that our approach achieves state-of-the-art performance in terms of both fidelity and semantic consistency on these benchmarks.