MMTalker: Multiresolution 3D Talking Head Synthesis with Multimodal Feature Fusion

arXiv cs.CV / 4/6/2026

Key Points

  • MMTalker is a new audio-driven 3D talking-head synthesis approach that maps 1D speech signals to time-varying 3D facial motion while addressing lip-sync and expression realism issues.
  • The method builds a continuous 3D face representation using mesh parameterization with UV-to-mesh correspondence and differentiable non-uniform sampling to better capture fine facial details (a minimal sampling sketch follows this list).
  • It extracts motion features via a residual graph convolutional network combined with a dual cross-attention mechanism for multimodal feature fusion (hierarchical speech features plus spatiotemporal geometric mesh features).
  • A lightweight regression module then predicts vertex-wise geometric displacements by jointly processing sampled points in canonical UV space and the encoded motion features.
  • Experiments report significant improvements over prior work, particularly in synchronization accuracy for lip and eye movements.

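For intuition, here is a minimal sketch of the differentiable non-uniform sampling step. The paper only states that each triangular face carries a learnable sampling probability; the Gumbel-softmax relaxation, tensor shapes, and module name below are illustrative assumptions rather than the authors' implementation.

```python
# Sketch: differentiable non-uniform sampling of points on a UV-parameterized mesh.
# Assumption: a Gumbel-softmax relaxation over learnable per-face logits stands in
# for the paper's "learnable sampling probability in each triangular face".
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonUniformMeshSampler(nn.Module):
    def __init__(self, num_faces: int, temperature: float = 0.5):
        super().__init__()
        # One learnable logit per triangular face; its (relaxed) softmax acts as
        # the per-face sampling probability.
        self.face_logits = nn.Parameter(torch.zeros(num_faces))
        self.temperature = temperature

    def forward(self, face_uv: torch.Tensor, num_samples: int) -> torch.Tensor:
        """
        face_uv: (F, 3, 2) UV coordinates of the three corners of each face.
        Returns: (num_samples, 2) sampled points in the canonical UV plane.
        """
        num_faces = face_uv.shape[0]
        device = face_uv.device

        # Soft, differentiable face selection: one relaxed one-hot vector per sample.
        logits = self.face_logits.expand(num_samples, num_faces)
        face_weights = F.gumbel_softmax(logits, tau=self.temperature, hard=False)  # (S, F)

        # Uniform barycentric coordinates inside a triangle (standard sqrt trick).
        r1 = torch.rand(num_samples, 1, device=device)
        r2 = torch.rand(num_samples, 1, device=device)
        w0 = 1.0 - torch.sqrt(r1)
        w1 = torch.sqrt(r1) * (1.0 - r2)
        w2 = torch.sqrt(r1) * r2
        bary = torch.cat([w0, w1, w2], dim=-1)                       # (S, 3)

        # Candidate point inside every face, then blend with the soft face weights
        # so gradients flow back into the per-face logits.
        pts_per_face = torch.einsum('sk,fkd->sfd', bary, face_uv)    # (S, F, 2)
        return torch.einsum('sf,sfd->sd', face_weights, pts_per_face)  # (S, 2)
```

Because face selection is relaxed rather than hard, gradients reach the per-face logits, which is what would let such a sampler concentrate points on detail-rich facial regions during training.
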
Abstract

Speech-driven three-dimensional (3D) facial animation synthesis aims to build a mapping from one-dimensional (1D) speech signals to time-varying 3D facial motion signals. Current methods still struggle to maintain lip-sync accuracy and produce realistic facial expressions, primarily because this cross-modal mapping is highly ill-posed. In this paper, we introduce MMTalker, a novel audio-driven 3D facial animation synthesis method based on multi-resolution representation and multimodal feature fusion that can accurately reconstruct the rich details of 3D facial motion. We first obtain a continuous, detail-preserving representation of the 3D face through mesh parameterization and differentiable non-uniform sampling. The mesh parameterization establishes a correspondence between the UV plane and the 3D facial mesh and provides ground truth for continuous learning. Differentiable non-uniform sampling captures fine facial details by assigning a learnable sampling probability to each triangular face. Next, we employ a residual graph convolutional network and a dual cross-attention mechanism to extract discriminative facial motion features from the input modalities. This multimodal fusion strategy makes full use of the hierarchical features of speech and the explicit spatiotemporal geometric features of the facial mesh. Finally, a lightweight regression network predicts the vertex-wise geometric displacements of the synthesized talking face by jointly processing the points sampled in the canonical UV space and the encoded facial motion features. Comprehensive experiments demonstrate significant improvements over state-of-the-art methods, especially in the synchronization accuracy of lip and eye movements.
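
To make the rest of the pipeline concrete, below is a rough sketch of a dual cross-attention fusion block and a lightweight displacement regressor operating on points in canonical UV space. The feature dimensions, number of attention heads, temporal pooling, and the way the fused motion code is tiled onto the sampled points are assumptions for illustration; the paper's residual graph convolutional encoder is taken as given and represented here only by its output features.

```python
# Sketch: dual cross-attention fusion of speech and mesh features, followed by a
# lightweight per-point displacement regressor. Dimensions and pooling choices
# are illustrative assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn

class DualCrossAttentionFusion(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        # Speech tokens attend to mesh tokens and vice versa.
        self.audio_to_mesh = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mesh_to_audio = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.merge = nn.Linear(2 * dim, dim)

    def forward(self, speech_feats, mesh_feats):
        # speech_feats: (B, T_a, dim) hierarchical speech features
        # mesh_feats:   (B, T_m, dim) spatiotemporal geometric features
        #               (assumed to come from the residual GCN encoder)
        a, _ = self.audio_to_mesh(speech_feats, mesh_feats, mesh_feats)
        m, _ = self.mesh_to_audio(mesh_feats, speech_feats, speech_feats)
        # Pool both streams over time and merge into a single motion code.
        fused = torch.cat([a.mean(dim=1), m.mean(dim=1)], dim=-1)    # (B, 2*dim)
        return self.merge(fused)                                     # (B, dim)

class DisplacementRegressor(nn.Module):
    """Predicts a 3D displacement for every point sampled in canonical UV space."""
    def __init__(self, dim: int = 256, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 + dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, uv_points, motion_code):
        # uv_points:   (B, N, 2) sampled UV coordinates
        # motion_code: (B, dim) fused audio-geometry motion feature
        B, N, _ = uv_points.shape
        code = motion_code.unsqueeze(1).expand(B, N, motion_code.shape[-1])
        return self.mlp(torch.cat([uv_points, code], dim=-1))        # (B, N, 3)

# Usage (shapes only): fuse per-frame features, then regress displacements.
fusion = DualCrossAttentionFusion()
regressor = DisplacementRegressor()
speech = torch.randn(1, 50, 256)   # hypothetical hierarchical speech features
mesh = torch.randn(1, 30, 256)     # hypothetical per-frame mesh feature tokens
uv = torch.rand(1, 1024, 2)        # points produced by the non-uniform sampler
displacements = regressor(uv, fusion(speech, mesh))                  # (1, 1024, 3)
```

Conditioning each UV point on the same fused motion code keeps the regressor lightweight, in line with the abstract's description; how the paper actually injects per-point conditioning is not specified in this summary.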