AU Codes, Language, and Synthesis: Translating Anatomy to Text for Facial Behavior Synthesis

arXiv cs.CV / 3/20/2026

Key Points

  • The paper identifies a core limitation of current AU-based text-to-face methods: encoding AUs as one-hot vectors and composing compound expressions linearly breaks down for conflicting AUs, producing anatomically implausible artifacts (see the sketch after this list).
  • It proposes describing facial Action Units in natural language instead, preserving the expressive richness of the AU framework while allowing complex and conflicting expressions to be modeled explicitly.
  • The authors introduce BP4D-AUText, a large-scale text-image paired dataset built by applying a rule-based Dynamic AU Text Processor to the BP4D and BP4D+ datasets.
  • They also present VQ-AUFace, a generative model that leverages facial structural priors to synthesize realistic and diverse facial behaviors from text, outperforming prior methods in anatomical plausibility and perceptual realism, especially under conflicting AUs.
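
To make the one-hot limitation concrete, here is a minimal Python sketch contrasting the two representations. The AU names are standard FACS labels; the specific conflicting pair (AU12, Lip Corner Puller, vs. AU15, Lip Corner Depressor) is an illustrative assumption, not an example taken from the paper.

```python
import numpy as np

# Standard FACS labels for a small illustrative subset of action units.
AU_NAMES = {1: "Inner Brow Raiser", 4: "Brow Lowerer",
            12: "Lip Corner Puller", 15: "Lip Corner Depressor"}
AU_INDEX = {au: i for i, au in enumerate(sorted(AU_NAMES))}

def one_hot_encoding(active_aus):
    """Encode a set of active AUs as a multi-hot vector: the linear
    scheme the paper critiques, where a compound expression is just
    the sum of individual one-hot AU vectors."""
    vec = np.zeros(len(AU_INDEX))
    for au in active_aus:
        vec[AU_INDEX[au]] = 1.0
    return vec

# AU12 pulls the lip corners up; AU15 pulls them down. Their one-hot
# sum is a flat multi-hot vector: nothing marks the two as conflicting,
# so a linear decoder simply superposes both motions.
print(one_hot_encoding({12, 15}))   # [0. 0. 1. 1.]

# A natural-language description can state the conflict explicitly.
print("Lip Corner Puller and Lip Corner Depressor are both active, "
      "pulling the mouth corners in opposing directions.")
```

The point of the contrast: the vector form erases the anatomical relationship between AUs, while the text form can carry it, which is what lets a text-to-image backbone reason about conflicting activations.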

Abstract

Facial behavior synthesis remains a critical yet underexplored challenge. While text-to-face models have made progress, they often rely on coarse emotion categories, which lack the nuance needed to capture the full spectrum of human nonverbal communication. Action Units (AUs) provide a more precise and anatomically grounded alternative. However, current AU-based approaches typically encode AUs as one-hot vectors, modeling compound expressions as simple linear combinations of individual AUs. This linearity becomes problematic when handling conflicting AUs, defined as those that activate the same facial muscle with opposing actions. Such cases lead to anatomically implausible artifacts and unnatural motion superpositions. To address this, we propose a novel method that represents facial behavior through natural language descriptions of AUs. This approach preserves the expressiveness of the AU framework while enabling explicit modeling of complex and conflicting AUs. It also unlocks the potential of modern text-to-image models for high-fidelity facial synthesis. Supporting this direction, we introduce BP4D-AUText, the first large-scale text-image paired dataset for complex facial behavior. It is synthesized by applying a rule-based Dynamic AU Text Processor to the BP4D and BP4D+ datasets. We further propose VQ-AUFace, a generative model that leverages facial structural priors to synthesize realistic and diverse facial behaviors from text. Extensive quantitative experiments and user studies demonstrate that our approach significantly outperforms existing methods. It excels at generating facial expressions that are anatomically plausible, behaviorally rich, and perceptually convincing, particularly under challenging conditions involving conflicting AUs.
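
The paper does not publish the rules of its Dynamic AU Text Processor here, so the sketch below only gestures at what a rule-based AU-to-text converter might look like. The AU names are standard FACS labels, but the sentence template, the conflict table, and the function name are hypothetical.

```python
# Hypothetical rule-based AU-to-text converter, in the spirit of the
# paper's Dynamic AU Text Processor (the actual rules are not given here).
AU_NAMES = {1: "Inner Brow Raiser", 2: "Outer Brow Raiser",
            4: "Brow Lowerer", 6: "Cheek Raiser",
            12: "Lip Corner Puller", 15: "Lip Corner Depressor"}

# Assumed conflict table: pairs that drive the same facial region in
# opposing directions (illustrative, not taken from the paper).
CONFLICTS = {frozenset({12, 15}), frozenset({1, 4})}

def aus_to_text(active_aus):
    """Render an AU activation set as a natural-language description,
    naming conflicting pairs explicitly rather than leaving them as an
    implicit linear superposition."""
    names = [AU_NAMES[au] for au in sorted(active_aus)]
    sentence = "The face shows " + ", ".join(names) + "."
    for pair in CONFLICTS:
        if pair <= set(active_aus):
            a, b = sorted(pair)
            sentence += (f" {AU_NAMES[a]} and {AU_NAMES[b]} act on the "
                         "same region with opposing motions.")
    return sentence

print(aus_to_text({4, 12, 15}))
# -> The face shows Brow Lowerer, Lip Corner Puller, Lip Corner Depressor.
#    Lip Corner Puller and Lip Corner Depressor act on the same region
#    with opposing motions.
```

Descriptions of this shape, paired with BP4D/BP4D+ frames, are what would form a text-image training corpus like BP4D-AUText for a text-conditioned generator such as VQ-AUFace.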