Motion-Adapter: A Diffusion Model Adapter for Text-to-Motion Generation of Compound Actions

arXiv cs.CV / 4/20/2026

📰 News · Models & Research

Key Points

  • The paper argues that existing text-to-motion diffusion models struggle with compound actions because they suffer from “catastrophic neglect” of earlier temporal segments and “attention collapse” from overly aggressive feature fusion in cross-attention.
  • Prior workarounds (overly detailed prompts, explicit body-part edits, or LLM-based body-part interpretation) still yield weak semantic representations of physical structure and kinematics, limiting natural compound behaviors such as greeting while walking.
  • The proposed Motion-Adapter is a plug-and-play module that improves compound action generation by computing decoupled cross-attention maps and using them as structural masks during the diffusion denoising process.
  • Experiments reported in the work show that Motion-Adapter generates more faithful, coherent full-body compound motions across varied text prompts and outperforms state-of-the-art methods.

Abstract

Recent advances in generative motion synthesis have enabled the production of realistic human motions from diverse input modalities. However, synthesizing compound actions from texts, which integrate multiple concurrent actions into coherent full-body sequences, remains a major challenge. We identify two key limitations in current text-to-motion diffusion models: (i) catastrophic neglect, where earlier actions are overwritten by later ones due to improper handling of temporal information, and (ii) attention collapse, which arises from excessive feature fusion in cross-attention mechanisms. As a result, existing approaches often depend on overly detailed textual descriptions (e.g., raising right hand), explicit body-part specifications (e.g., editing the upper body), or the use of large language models (LLMs) for body-part interpretation. These strategies lead to deficient semantic representations of physical structures and kinematic mechanisms, limiting the ability to incorporate natural behaviors such as greeting while walking. To address these issues, we propose the Motion-Adapter, a plug-and-play module that guides text-to-motion diffusion models in generating compound actions by computing decoupled cross-attention maps, which serve as structural masks during the denoising process. Extensive experiments demonstrate that our method consistently produces more faithful and coherent compound motions across diverse textual prompts, surpassing state-of-the-art approaches.
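The core idea, decoupled per-action cross-attention maps that act as structural masks during denoising, can be sketched as follows. This is a minimal illustration under assumed shapes, not the paper's implementation: the function name, the single-vector action embeddings, and the max-relative thresholding rule are all hypothetical stand-ins.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def decoupled_cross_attention(motion_feats, action_embeds, mask_threshold=0.5):
    """Hypothetical sketch of the decoupling step.

    Instead of fusing all text tokens into one cross-attention map (the
    'attention collapse' failure mode), each action phrase gets its own
    attention map over the motion sequence. Each map is then binarized
    into a structural mask that could gate which frames an action is
    allowed to influence during denoising.

    motion_feats:  (T, d) array of per-frame motion features.
    action_embeds: list of (d,) embeddings, one per action phrase.
    """
    T, d = motion_feats.shape
    maps, masks = [], []
    for emb in action_embeds:
        scores = motion_feats @ emb / np.sqrt(d)  # (T,) similarity scores
        attn = softmax(scores, axis=0)            # attention over frames
        # Assumed rule: keep frames whose attention is within
        # mask_threshold of the per-action maximum.
        masks.append((attn >= mask_threshold * attn.max()).astype(float))
        maps.append(attn)
    return np.stack(maps), np.stack(masks)
```

In this sketch, a prompt like "greeting while walking" would yield two separate attention maps, so neither action's temporal footprint is overwritten by the other's (the 'catastrophic neglect' failure mode).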