Realistic Lip Motion Generation Based on 3D Dynamic Viseme and Coarticulation Modeling for Human-Robot Interaction

arXiv cs.RO / 4/3/2026


Key Points

  • The paper proposes a speech-driven lip motion generation framework for humanoid robots using 3D dynamic viseme modeling and coarticulation to achieve realistic lip synchronization for non-verbal interaction.
  • It builds a coherent 3D dynamic viseme library aligned with the ARKit standard by leveraging Chinese pronunciation theory, aiming to provide reliable prior trajectories for continuous speech.
  • To address motion conflicts in continuous speech, it introduces a coarticulation mechanism combining initial–final (Shengmu–Yunmu) decoupling with energy modulation.
  • The method includes a retargeting strategy that maps high-dimensional spatial lip motion to a 14-DOF lip actuation system on a humanoid head platform, and validates performance via ablation studies using the Pearson Correlation Coefficient (PCC) and Mean Absolute Jerk (MAJ) metrics.
  • The authors release the 3D dynamic viseme library and deployment videos on GitHub, positioning the approach as lightweight and practical for real-world human-robot interaction scenarios.
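
The paper gives no implementation details for the initial–final coarticulation mechanism, but the idea it describes can be sketched as a crossfade between the consonant (Shengmu) and vowel (Yunmu) viseme trajectories, gated by speech energy. Everything below — the function name, the linear crossfade, and the `(T, D)` blendshape layout — is an illustrative assumption, not the authors' method:

```python
import numpy as np

def blend_visemes(initial_traj, final_traj, energy):
    """Hypothetical initial-final (Shengmu-Yunmu) coarticulation sketch.

    initial_traj, final_traj : (T, D) arrays of ARKit-style blendshape
        weights for the consonant and vowel visemes of one syllable.
    energy : (T,) per-frame speech energy in [0, 1], used to modulate
        the amplitude of the blended motion.
    """
    initial_traj = np.asarray(initial_traj, dtype=float)
    final_traj = np.asarray(final_traj, dtype=float)
    T = initial_traj.shape[0]

    # Linear crossfade from the initial viseme into the final viseme.
    w = np.linspace(0.0, 1.0, T)[:, None]
    blended = (1.0 - w) * initial_traj + w * final_traj

    # Energy modulation: quiet frames produce smaller lip excursions.
    gain = np.clip(np.asarray(energy, dtype=float), 0.0, 1.0)[:, None]
    return gain * blended
```

A decoupled blend like this avoids the motion conflicts the paper mentions: the consonant shape dominates the syllable onset and the vowel shape the rhyme, rather than both targets competing frame by frame.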

Abstract

Realistic lip synchronization is essential for natural non-verbal human-robot interaction with humanoid robots. Motivated by this need, this paper presents a lip motion generation framework based on 3D dynamic viseme and coarticulation modeling. By analyzing Chinese pronunciation theory, a 3D dynamic viseme library is constructed based on the ARKit standard, which offers coherent prior trajectories of the lips. To resolve motion conflicts within continuous speech streams, a coarticulation mechanism is developed by incorporating initial-final (Shengmu-Yunmu) decoupling and energy modulation. After developing a strategy to retarget high-dimensional spatial lip motion to a 14-DOF lip actuation system of a humanoid head platform, the efficiency and accuracy of the proposed architecture are experimentally validated and demonstrated with quantitative ablation experiments using the metrics of the Pearson Correlation Coefficient (PCC) and the Mean Absolute Jerk (MAJ). This research offers a lightweight, efficient, and highly practical paradigm for the speech-driven lip motion generation of humanoid robots. The 3D dynamic viseme library and real-world deployment videos are available at https://github.com/yuesheng21/Phoneme-to-Lip-14DOF
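
The abstract names PCC and MAJ as the evaluation metrics but does not spell out how they are computed. In their standard forms they can be sketched as below; the frame interval `dt` (assumed 30 fps) and the use of finite differences for jerk are assumptions, not details from the paper:

```python
import numpy as np

def pearson_cc(pred, ref):
    """Pearson Correlation Coefficient (PCC) between a predicted and a
    reference 1-D lip trajectory; 1.0 means perfectly correlated motion."""
    pred = np.asarray(pred, dtype=float)
    ref = np.asarray(ref, dtype=float)
    return float(np.corrcoef(pred, ref)[0, 1])

def mean_absolute_jerk(traj, dt=1.0 / 30.0):
    """Mean Absolute Jerk (MAJ): mean |third finite difference| / dt^3.
    A standard smoothness proxy; lower values indicate smoother motion."""
    traj = np.asarray(traj, dtype=float)
    jerk = np.diff(traj, n=3) / dt**3
    return float(np.mean(np.abs(jerk)))
```

Together the two capture the trade-off the ablations probe: PCC measures how faithfully the generated trajectory tracks the reference, while MAJ penalizes the jitter that coarticulation modeling is meant to suppress.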