SEDTalker: Emotion-Aware 3D Facial Animation Using Frame-Level Speech Emotion Diarization
arXiv cs.CV / 4/16/2026
Key Points
- SEDTalker is introduced as an emotion-aware, speech-driven framework for generating 3D facial animations with fine-grained temporal control over facial expressions.
- Instead of using utterance-level or manually provided emotion labels, it performs frame-level speech emotion diarization to predict temporally dense emotion categories and intensities directly from audio.
- The predicted emotion signals are converted into learned embeddings and used to condition a hybrid Transformer–Mamba architecture for expressive 3D talking-head generation.
- The approach aims to disentangle linguistic content from emotional style while maintaining identity preservation and temporal coherence in the resulting animation.
- Experiments on multi-corpus emotion diarization data and on EmoVOCA show strong frame-level emotion recognition plus low geometric and temporal reconstruction errors, with smooth emotion transitions in qualitative results.
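The conditioning step described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: all dimensions, the random embedding table, and the NumPy stand-ins for learned components are assumptions. It shows how per-frame emotion categories and intensities might be mapped to embeddings and fused with audio features before being fed to an animation decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper).
T = 50            # number of audio frames
N_EMOTIONS = 7    # frame-level emotion categories
D_EMO = 16        # emotion embedding size
D_AUDIO = 64      # per-frame audio feature size

# Stand-in for the diarizer's per-frame outputs: an emotion
# class id and an intensity in [0, 1] for every frame.
emotion_ids = rng.integers(0, N_EMOTIONS, size=T)
intensities = rng.uniform(0.0, 1.0, size=T)

# Learned embedding table (random here; trained in practice).
emotion_table = rng.standard_normal((N_EMOTIONS, D_EMO))

# Per-frame audio features, e.g. from a pretrained speech encoder.
audio_feats = rng.standard_normal((T, D_AUDIO))

# Look up each frame's emotion embedding, scale it by intensity,
# and concatenate with the audio features to form the sequence
# that would condition the animation decoder.
emo_embed = emotion_table[emotion_ids] * intensities[:, None]
conditioning = np.concatenate([audio_feats, emo_embed], axis=1)

print(conditioning.shape)  # (50, 80)
```

Because the emotion signal is dense in time rather than a single utterance-level label, the decoder receives a potentially different emotion vector at every frame, which is what enables the smooth emotion transitions reported in the qualitative results.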