TDMM-LM: Bridging Facial Understanding and Animation via Language Models

arXiv cs.CV / 3/19/2026


Key Points

  • The authors leverage foundation generative models to synthesize about 80 hours of facial videos with a prompt suite covering emotions and head motions, and fit per-frame 3D facial parameters to create large-scale prompt-and-parameter training data.
  • They define two bidirectional tasks, Motion2Language and Language2Motion, that map between sequences of 3D facial parameters and natural-language descriptions or prompts to enable text-conditioned animation (a tokenization sketch follows this list).
  • Extensive experiments show that language models can both interpret and synthesize facial motion with strong generalization, effectively casting facial-parameter modeling as a language problem.
  • The work establishes a unified path for text-conditioned facial animation and motion understanding, potentially transforming how animation pipelines approach data generation and cross-modal reasoning.
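
The paper does not spell out the tokenizer, but the "quantized motion tokens" in Language2Motion imply a VQ-style quantizer that turns continuous per-frame parameters into a discrete vocabulary a language model can read and emit. The sketch below is only an illustration of that idea; the codebook size K, the parameter dimension D, and the random codebook itself are placeholder assumptions, not values from the paper.

```python
import numpy as np

# Placeholder codebook: K code vectors of dimension D. Both sizes and the
# random initialization are assumptions; in practice the codes would be
# learned (e.g., with a VQ-VAE) on fitted facial-parameter sequences.
K, D = 512, 156
rng = np.random.default_rng(0)
codebook = rng.standard_normal((K, D)).astype(np.float32)

def motion_to_tokens(params: np.ndarray) -> list[int]:
    """Quantize a (T, D) facial-parameter sequence to discrete token IDs
    via nearest-neighbor lookup in the codebook."""
    dists = ((params[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1).tolist()

def tokens_to_motion(tokens: list[int]) -> np.ndarray:
    """Invert the quantization: map token IDs back to a (T, D) sequence."""
    return codebook[np.asarray(tokens)]

# Round trip on 30 synthetic frames. A language model would treat the IDs
# as extra vocabulary items, e.g. "<motion_17> <motion_204> ...".
frames = rng.standard_normal((30, D)).astype(np.float32)
ids = motion_to_tokens(frames)
assert tokens_to_motion(ids).shape == frames.shape
```

Once motion is discrete, both directions reduce to ordinary sequence-to-sequence language modeling: Motion2Language conditions on motion tokens and decodes text, and Language2Motion does the reverse.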

Abstract

Text-guided human body animation has advanced rapidly, yet facial animation lags due to the scarcity of well-annotated, text-paired facial corpora. To close this gap, we leverage foundation generative models to synthesize a large, balanced corpus of facial behavior. We design a prompt suite covering emotions and head motions, generate about 80 hours of facial videos with multiple generators, and fit per-frame 3D facial parameters, yielding large-scale (prompt, parameter) pairs for training. Building on this dataset, we probe language models for bidirectional competence over facial motion via two complementary tasks: (1) Motion2Language: given a sequence of 3D facial parameters, the model produces natural-language descriptions capturing content, style, and dynamics; and (2) Language2Motion: given a prompt, the model synthesizes the corresponding sequence of 3D facial parameters via quantized motion tokens for downstream animation. Extensive experiments show that, in this setting, language models can both interpret and synthesize facial motion with strong generalization. To the best of our knowledge, this is the first work to cast facial-parameter modeling as a language problem, establishing a unified path for text-conditioned facial animation and motion understanding.
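
As a rough illustration of the data pipeline the abstract describes, the sketch below chains a text-to-video generator to a per-frame 3D face fitter to accumulate (prompt, parameter) pairs. Both generate_video and fit_face_params are hypothetical stand-ins for whichever foundation generators and fitting method the authors actually used.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class Sample:
    prompt: str                 # text prompt given to the video generator
    params: list[list[float]]   # fitted per-frame 3D facial parameters (T x D)

def build_corpus(
    prompts: Sequence[str],
    generate_video: Callable[[str], list],             # hypothetical: prompt -> frames
    fit_face_params: Callable[[object], list[float]],  # hypothetical: frame -> params
) -> list[Sample]:
    """Assumed pipeline shape: synthesize one clip per prompt with a
    foundation video model, fit 3D facial parameters frame by frame,
    and keep the (prompt, parameter-sequence) pair for training."""
    corpus = []
    for prompt in prompts:
        frames = generate_video(prompt)
        corpus.append(Sample(prompt, [fit_face_params(f) for f in frames]))
    return corpus
```

Generating from a balanced prompt suite, rather than scraping found footage, is what keeps the resulting corpus evenly distributed over emotions and head motions.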