Bilingual Text-to-Motion Generation: A New Benchmark and Baselines

arXiv cs.CL · March 27, 2026


Key Points

  • This paper introduces BiHumanML3D, described as the first bilingual benchmark for text-to-motion generation, addressing prior gaps in bilingual datasets and cross-lingual semantic understanding.
  • The benchmark is created using LLM-assisted annotation followed by rigorous manual correction to improve dataset reliability.
  • It proposes Bilingual Motion Diffusion (BiMD) with Cross-Lingual Alignment (CLA), which explicitly aligns semantic representations across languages to form a robust conditional space for motion synthesis.
  • Experiments on BiHumanML3D show that BiMD with CLA substantially outperforms monolingual diffusion and translation-based baselines (e.g., FID 0.045 vs. 0.169; R@3 82.8% vs. 80.8%), including in zero-shot code-switching scenarios.
  • The authors report releasing the dataset and code publicly, enabling follow-up research on bilingual and cross-lingual text-to-motion methods.
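The paper does not spell out the exact form of the Cross-Lingual Alignment objective; a common way to realize the idea of "explicitly aligning semantic representations across languages" is to penalize the distance between embeddings of paired English/Chinese captions. The sketch below is a hypothetical minimal version of such a loss (the function name and use of cosine distance are assumptions, not the authors' implementation):

```python
import numpy as np

def cross_lingual_alignment_loss(en_emb: np.ndarray, zh_emb: np.ndarray) -> float:
    """Mean (1 - cosine similarity) over paired sentence embeddings.

    en_emb, zh_emb: (batch, dim) arrays of text-encoder outputs for the
    same captions in English and Chinese. A loss of 0 means every pair
    maps to the same direction in the conditional space.
    """
    # L2-normalize each embedding so the dot product is cosine similarity
    en = en_emb / np.linalg.norm(en_emb, axis=1, keepdims=True)
    zh = zh_emb / np.linalg.norm(zh_emb, axis=1, keepdims=True)
    cos = np.sum(en * zh, axis=1)          # per-pair cosine similarity
    return float(np.mean(1.0 - cos))       # 0 when perfectly aligned

# Identical embeddings are perfectly aligned: loss is 0
pairs = np.array([[1.0, 0.0], [0.0, 1.0]])
print(cross_lingual_alignment_loss(pairs, pairs))  # 0.0
```

Minimizing such a term alongside the diffusion loss pulls the two languages into a shared conditional space, which is what makes mixed-language (code-switched) prompts usable at inference time.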

Abstract

Text-to-motion generation holds significant potential for cross-linguistic applications, yet it is hindered by the lack of bilingual datasets and the poor cross-lingual semantic understanding of existing language models. To address these gaps, we introduce BiHumanML3D, the first bilingual text-to-motion benchmark, constructed via LLM-assisted annotation and rigorous manual correction. Furthermore, we propose a simple yet effective baseline, Bilingual Motion Diffusion (BiMD), featuring Cross-Lingual Alignment (CLA). CLA explicitly aligns semantic representations across languages, creating a robust conditional space that enables high-quality motion generation from bilingual inputs, including zero-shot code-switching scenarios. Extensive experiments demonstrate that BiMD with CLA achieves an FID of 0.045 vs. 0.169 and R@3 of 82.8% vs. 80.8%, significantly outperforming monolingual diffusion models and translation baselines on BiHumanML3D, underscoring the critical necessity and reliability of our dataset and the effectiveness of our alignment strategy for cross-lingual motion synthesis. The dataset and code are released at https://wengwanjiang.github.io/BilingualT2M-page.