MotionRFT: Unified Reinforcement Fine-Tuning for Text-to-Motion Generation

arXiv cs.CV / 3/31/2026


Key Points

  • MotionRFT proposes a reinforcement fine-tuning framework for text-to-motion generation that addresses gaps in supervised pretraining for goals like semantic consistency, realism, and human preference alignment.
  • The system uses MotionReward to unify heterogeneous motion representations into a shared semantic space anchored by text, enabling multi-dimensional reward learning and improved semantics via self-refinement preference learning without extra annotations.
  • To reduce the computational bottleneck from recursive gradient dependence across diffusion denoising steps, MotionRFT introduces EasyTune, which performs step-wise (not full-trajectory) optimization for dense, fine-grained, and memory-efficient updates.
  • Experiments show strong efficiency and quality improvements, including FID 0.132 with 22.10 GB peak memory on an MLD model, up to 15.22 GB memory savings over DRaFT, and reported FID/R-precision gains on joint-based ACMDM and rotation-based HY Motion.
  • The authors report that a public project page with code is available, supporting reproducibility and downstream adoption by researchers.
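The contrast the third bullet draws, full-trajectory backpropagation versus step-wise updates, can be illustrated with a toy 1-D "denoiser". This is a hedged sketch of the general idea only: the linear step, the quadratic reward, and all function names are illustrative assumptions, not the authors' EasyTune implementation.

```python
def reward_grad(x0, target=0.0):
    """d/dx0 of the quadratic reward -(x0 - target)^2."""
    return -2.0 * (x0 - target)

def denoise_step(x, theta):
    """One linear 'denoising' step: shrink x by a learned rate theta."""
    return (1.0 - theta) * x

def full_trajectory_grad(x_T, theta, T):
    """Backprop the reward through all T steps (recursive dependence):
    x0 = (1 - theta)^T * x_T, so dx0/dtheta = -T * (1 - theta)^(T-1) * x_T.
    Every intermediate latent is kept for the backward pass, so the
    memory proxy grows linearly with T."""
    stored = [x_T]                      # activations retained for backprop
    x = x_T
    for _ in range(T):
        x = denoise_step(x, theta)
        stored.append(x)
    dx0_dtheta = -T * (1.0 - theta) ** (T - 1) * x_T
    return reward_grad(x) * dx0_dtheta, len(stored)

def step_wise_grads(x_T, theta, T):
    """Step-wise sketch: treat each incoming latent as a constant
    ('detached') and differentiate through that single step only.
    Memory stays O(1) in T, and every step yields an update."""
    grads, x = [], x_T
    for _ in range(T):
        x_in = x                        # detached: no gradient flows into it
        x = denoise_step(x_in, theta)   # only this step is differentiated
        dx_dtheta = -x_in               # d/dtheta of (1 - theta) * x_in
        grads.append(reward_grad(x) * dx_dtheta)
    return grads, 1                     # one step's activations at a time

full_g, full_mem = full_trajectory_grad(x_T=1.0, theta=0.1, T=50)
step_g, step_mem = step_wise_grads(x_T=1.0, theta=0.1, T=50)
print(full_mem, step_mem, len(step_g))  # memory proxy 51 vs 1; 50 dense updates
```

The trade-off the toy makes visible: the full-trajectory gradient is exact but must retain every intermediate latent, while the step-wise variant keeps a constant memory footprint and produces one update per denoising step, which is the dense, fine-grained behavior the paper attributes to EasyTune.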

Abstract

Text-to-motion generation has advanced with diffusion- and flow-based generative models, yet supervised pretraining remains insufficient to align models with high-level objectives such as semantic consistency, realism, and human preference. Existing post-training methods have key limitations: they (1) target a specific motion representation, such as joints; (2) optimize a particular aspect, such as text-motion alignment, and may compromise other factors; and (3) incur substantial computational overhead, data dependence, and coarse-grained optimization. We present a reinforcement fine-tuning framework that comprises a heterogeneous-representation, multi-dimensional reward model, MotionReward, and an efficient, fine-grained fine-tuning method, EasyTune. To obtain a unified semantic representation, MotionReward maps heterogeneous motions into a shared semantic space anchored by text, enabling multidimensional reward learning; Self-refinement Preference Learning further enhances semantics without additional annotations. For efficient and effective fine-tuning, we identify the recursive gradient dependence across denoising steps as the key bottleneck, and propose EasyTune, which optimizes step-wise rather than over the full trajectory, yielding dense, fine-grained, and memory-efficient updates. Extensive experiments validate the effectiveness of our framework, achieving FID 0.132 at 22.10 GB peak memory for the MLD model and saving up to 15.22 GB over DRaFT. It reduces FID by 22.9% on joint-based ACMDM, and achieves a 12.6% R-Precision gain and 23.3% FID improvement on rotation-based HY Motion. Our project page with code is publicly available.
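The shared-space idea behind MotionReward can be sketched as follows: separate encoders project heterogeneous motion representations (e.g., joint positions vs. rotations) into one text-anchored embedding space, where a single cosine-similarity reward scores both. Everything below is an illustrative assumption; the random linear encoders, dimensions, and names stand in for the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
EMB = 8                                  # shared embedding dimension (assumed)

# Per-representation encoders, here just random linear maps for illustration.
W_joint = rng.standard_normal((EMB, 22 * 3))  # joint-based: 22 joints x xyz
W_rot = rng.standard_normal((EMB, 24 * 6))    # rotation-based: 6D rotations

def embed(x, W):
    """Project a flattened motion into the shared space and L2-normalize."""
    z = W @ x
    return z / np.linalg.norm(z)

def motion_reward(motion, W_enc, text_emb):
    """Cosine similarity between a motion embedding and the text anchor."""
    return float(embed(motion, W_enc) @ text_emb)

# A (hypothetical) text embedding acts as the anchor for both encoders.
text_emb = rng.standard_normal(EMB)
text_emb /= np.linalg.norm(text_emb)

joint_motion = rng.standard_normal(22 * 3)
rot_motion = rng.standard_normal(24 * 6)

# One reward head scores both representations in the same space.
r_joint = motion_reward(joint_motion, W_joint, text_emb)
r_rot = motion_reward(rot_motion, W_rot, text_emb)
assert -1.0 <= r_joint <= 1.0 and -1.0 <= r_rot <= 1.0
```

The point of the sketch is the interface, not the encoders: because both motion formats land in the same normalized space as the text, one multi-dimensional reward model can serve joint-based models like ACMDM and rotation-based models like HY Motion without per-representation reward heads.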