Not All Frames Are Equal: Complexity-Aware Masked Motion Generation via Motion Spectral Descriptors

arXiv cs.CV / 4/1/2026


Key Points

  • The paper argues that masked generative models for text-to-motion treat all motion frames too uniformly, even though motion dynamics vary sharply over time, which leads to disproportionate degradation on complex segments.
  • It introduces the Motion Spectral Descriptor (MSD), a deterministic, parameter-free measure of local dynamic complexity computed from the short-time spectrum of motion velocity, designed to be interpretable and derived directly from the motion signal.
  • The proposed DynMask method uses MSD to make masked motion generation complexity-aware by guiding content-focused masking during training, adding a spectral similarity prior for self-attention, and optionally modulating token-level sampling during iterative decoding.
  • Experiments show that DynMask improves generation most clearly on dynamically complex motions and achieves stronger overall FID on HumanML3D and KIT-ML, supporting the design principle of respecting local motion complexity for masked motion generation.
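
As a concrete illustration of the descriptor described above, the following sketch computes an MSD-like per-frame complexity score from the short-time spectrum of motion velocity. The paper's exact formula is not given here, so the window size, the frequency weighting, and the function name are assumptions chosen for clarity:

```python
import numpy as np

def motion_spectral_descriptor(motion, win=16):
    """Illustrative, hedged sketch of an MSD-like score (the paper's
    exact definition may differ): per-frame dynamic complexity from
    the short-time spectrum of motion velocity.

    motion: (T, D) array of pose features over T frames.
    Returns: (T,) nonnegative complexity scores.
    """
    # Frame-wise velocity via first differences (prepend keeps length T).
    vel = np.diff(motion, axis=0, prepend=motion[:1])   # (T, D)
    speed = np.linalg.norm(vel, axis=1)                 # (T,)

    T = len(speed)
    half = win // 2
    padded = np.pad(speed, (half, half), mode="edge")
    window = np.hanning(win)
    freqs = np.fft.rfftfreq(win)                        # normalized frequencies

    msd = np.empty(T)
    for t in range(T):
        # Short-time spectrum of the velocity magnitude around frame t.
        seg = padded[t:t + win] * window
        spec = np.abs(np.fft.rfft(seg))
        # Frequency-weighted spectral energy: rapid velocity changes
        # (high-frequency content) score as more dynamically complex.
        msd[t] = np.sum(freqs * spec)
    return msd
```

Because the score is a deterministic function of the velocity spectrum, a static pose yields zero complexity while oscillatory motion yields a strictly positive score, matching the paper's claim that MSD is parameter-free and interpretable.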

Abstract

Masked generative models have become a strong paradigm for text-to-motion synthesis, but they still treat motion frames too uniformly during masking, attention, and decoding. This is a poor match for motion, where local dynamic complexity varies sharply over time. We show that current masked motion generators degrade disproportionately on dynamically complex motions, and that frame-wise generation error is strongly correlated with motion dynamics. Motivated by this mismatch, we introduce the Motion Spectral Descriptor (MSD), a simple and parameter-free measure of local dynamic complexity computed from the short-time spectrum of motion velocity. Unlike learned difficulty predictors, MSD is deterministic, interpretable, and derived directly from the motion signal itself. We use MSD to make masked motion generation complexity-aware. In particular, MSD guides content-focused masking during training, provides a spectral similarity prior for self-attention, and can additionally modulate token-level sampling during iterative decoding. Built on top of masked motion generators, our method, DynMask, improves motion generation most clearly on dynamically complex motions while also yielding stronger overall FID on HumanML3D and KIT-ML. These results suggest that respecting local motion complexity is a useful design principle for masked motion generation. Project page: https://xiangyue-zhang.github.io/DynMask
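
One way to picture the content-focused masking described in the abstract is to bias the training-time mask toward high-MSD frames, so dynamically complex segments are masked (and hence reconstructed) more often. The sampling scheme, temperature, and function names below are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def complexity_aware_mask(msd, mask_ratio=0.5, temperature=1.0, rng=None):
    """Hedged sketch of MSD-guided content-focused masking: choose
    which motion tokens to mask with probability increasing in their
    complexity score. DynMask's actual scheme may differ.

    msd: (T,) per-frame complexity scores.
    Returns: (T,) boolean mask, True = masked.
    """
    rng = np.random.default_rng(rng)
    T = len(msd)
    n_mask = max(1, int(round(mask_ratio * T)))
    # Softmax over MSD turns complexity scores into masking propensities.
    logits = np.asarray(msd, dtype=float) / temperature
    p = np.exp(logits - logits.max())
    p /= p.sum()
    # Sample n_mask distinct token indices, complexity-weighted.
    idx = rng.choice(T, size=n_mask, replace=False, p=p)
    mask = np.zeros(T, dtype=bool)
    mask[idx] = True
    return mask
```

Lowering `temperature` sharpens the bias toward complex frames; a very high temperature recovers the uniform masking that the paper argues is a poor match for motion.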