SEDTalker: Emotion-Aware 3D Facial Animation Using Frame-Level Speech Emotion Diarization

arXiv cs.CV / 4/16/2026

Key Points

  • SEDTalker is introduced as an emotion-aware, speech-driven framework for generating 3D facial animations with fine-grained, time-varying control over facial expressions.
  • Instead of using utterance-level or manually provided emotion labels, it performs frame-level speech emotion diarization, predicting temporally dense emotion categories and intensities directly from audio (see the sketch after this list).
  • The predicted emotion signals are converted into learned embeddings and used to condition a hybrid Transformer–Mamba architecture for expressive 3D talking-head generation.
  • The approach aims to disentangle linguistic content from emotional style while preserving identity and temporal coherence in the resulting animation.
  • Experiments on a multi-corpus speech emotion diarization benchmark and on EmoVOCA show strong frame-level emotion recognition as well as low geometric and temporal reconstruction errors, with smooth emotion transitions in qualitative results.

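To make the diarization step concrete, the following is a minimal sketch of a per-frame emotion head, assuming frame-level features from a pretrained speech encoder (e.g. a wav2vec-style model). The label set, dimensions, and GRU context model are illustrative assumptions, not the paper's architecture.

```python
# Minimal sketch of a frame-level emotion diarization head (illustrative, not
# the authors' code). Assumes per-frame features from a pretrained speech
# encoder; NUM_EMOTIONS and all dimensions are hypothetical.
import torch
import torch.nn as nn

NUM_EMOTIONS = 7  # hypothetical label set (neutral, happy, sad, ...)

class EmotionDiarizationHead(nn.Module):
    def __init__(self, feat_dim: int = 768, hidden: int = 256):
        super().__init__()
        # Small bidirectional recurrence to give each frame temporal context.
        self.temporal = nn.GRU(feat_dim, hidden, batch_first=True,
                               bidirectional=True)
        self.class_head = nn.Linear(2 * hidden, NUM_EMOTIONS)  # per-frame category logits
        self.intensity_head = nn.Linear(2 * hidden, 1)         # per-frame intensity

    def forward(self, feats: torch.Tensor):
        # feats: (batch, frames, feat_dim) frame-level speech features
        h, _ = self.temporal(feats)
        logits = self.class_head(h)                        # (B, T, NUM_EMOTIONS)
        intensity = torch.sigmoid(self.intensity_head(h))  # (B, T, 1), in [0, 1]
        return logits, intensity

feats = torch.randn(2, 100, 768)      # e.g. 100 frames of encoder output
logits, intensity = EmotionDiarizationHead()(feats)
print(logits.shape, intensity.shape)  # (2, 100, 7) and (2, 100, 1)
```

The key property is that every frame gets its own category distribution and intensity, which is what allows the downstream animation model to modulate expressions continuously rather than per utterance.
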
Abstract

We introduce SEDTalker, an emotion-aware framework for speech-driven 3D facial animation that leverages frame-level speech emotion diarization to achieve fine-grained expressive control. Unlike prior approaches that rely on utterance-level or manually specified emotion labels, our method predicts temporally dense emotion categories and intensities directly from speech, enabling continuous modulation of facial expressions over time. The diarized emotion signals are encoded as learned embeddings and used to condition a speech-driven 3D animation model based on a hybrid Transformer–Mamba architecture. This design allows effective disentanglement of linguistic content and emotional style while preserving identity and temporal coherence. We evaluate our approach on a large-scale multi-corpus dataset for speech emotion diarization and on the EmoVOCA dataset for emotional 3D facial animation. Quantitative results demonstrate strong frame-level emotion recognition performance and low geometric and temporal reconstruction errors, while qualitative results show smooth emotion transitions and consistent expression control. These findings highlight the effectiveness of frame-level emotion diarization for expressive and controllable 3D talking-head generation.
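
As a rough illustration of how the diarized signals could condition the animation model, the sketch below scales a learned per-frame emotion embedding by its predicted intensity and adds it to the audio features before a two-stage sequence model. A GRU stands in for the Mamba state-space stage to keep the example dependency-free (an actual implementation would use an SSM layer such as mamba_ssm.Mamba), and the vertex count assumes a FLAME-topology mesh; all names and dimensions here are hypothetical, not the paper's.

```python
# Minimal sketch of emotion-conditioned animation decoding (illustrative, not
# the paper's model). Per-frame emotion ids and intensities modulate learned
# embeddings that are added to the audio features; a GRU stands in for the
# Mamba state-space stage, and the vertex count assumes a FLAME-topology mesh.
import torch
import torch.nn as nn

class EmotionConditionedDecoder(nn.Module):
    def __init__(self, feat_dim: int = 768, num_emotions: int = 7,
                 n_vertices: int = 5023):
        super().__init__()
        self.emotion_emb = nn.Embedding(num_emotions, feat_dim)
        # Transformer stage: global self-attention over the whole sequence.
        self.attn = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=8,
                                               batch_first=True)
        # Stand-in for the Mamba stage: a linear-time recurrent pass.
        self.ssm = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.to_offsets = nn.Linear(feat_dim, 3 * n_vertices)

    def forward(self, audio_feats, emo_ids, emo_intensity):
        # audio_feats: (B, T, D); emo_ids: (B, T) int64; emo_intensity: (B, T, 1)
        # Scale each frame's emotion embedding by its predicted intensity.
        cond = audio_feats + emo_intensity * self.emotion_emb(emo_ids)
        h = self.attn(cond)
        h, _ = self.ssm(h)
        # Per-frame vertex offsets, to be added to a neutral template mesh.
        return self.to_offsets(h)  # (B, T, 3 * n_vertices)

B, T = 2, 100
offsets = EmotionConditionedDecoder()(torch.randn(B, T, 768),
                                      torch.randint(0, 7, (B, T)),
                                      torch.rand(B, T, 1))
print(offsets.shape)  # torch.Size([2, 100, 15069])
```

Intensity-scaled additive conditioning is just one plausible reading of "encoded as learned embeddings and used to condition"; the paper may use a different fusion mechanism.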