EAD-Net: Emotion-Aware Talking Head Generation with Spatial Refinement and Temporal Coherence

arXiv cs.CV / 4/28/2026


Key Points

  • The paper introduces EAD-Net, an emotion-aware, diffusion-based framework for generating talking-head videos with both accurate lip synchronization and controllable emotional facial expressions.
  • It addresses the limited semantics of prior emotion-label approaches by adding high-level guidance in the form of textual descriptions that a large language model extracts from real videos, and it mitigates the resulting lip-sync degradation via SyncNet supervision and Temporal Representation Alignment (TREPA).
  • For long-video generation, EAD-Net improves global motion awareness and temporal stability with Spatio-Temporal Directional Attention (STDA), which uses strip attention to capture long-range spatio-temporal dependencies (a minimal sketch follows this list).
  • It further enhances temporal coherence by explicitly reasoning across frames with a Temporal Frame graph Reasoning Module (TFRM) that learns graph structures between frame features.
  • Experiments on HDTF and MEAD report improved performance over existing methods in lip-sync accuracy, temporal consistency, and emotional accuracy.
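
To make the strip-attention idea concrete, below is a minimal PyTorch sketch of directional strip attention over a spatio-temporal feature volume. It is illustrative only: the module names, tensor shapes, and fusion scheme are assumptions, not the authors' STDA implementation.

```python
# Minimal sketch of strip-style spatio-temporal attention (hypothetical;
# the paper's STDA details are not specified in this article).
import torch
import torch.nn as nn


class StripAttention(nn.Module):
    """Multi-head self-attention applied along one axis ("strip") of a
    (batch, time, height, width, channels) feature volume."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor, axis: int) -> torch.Tensor:
        # Move the chosen axis (1=time, 2=height, 3=width) next to channels,
        # flatten the remaining axes into the batch, attend, then restore.
        c = x.shape[-1]
        x = x.movedim(axis, -2)                        # (..., strip_len, c)
        lead, strip_len = x.shape[:-2], x.shape[-2]
        seq = x.reshape(-1, strip_len, c)              # (B*, strip_len, c)
        out, _ = self.attn(seq, seq, seq)
        return out.reshape(*lead, strip_len, c).movedim(-2, axis)


class STDABlock(nn.Module):
    """Fuses temporal, vertical, and horizontal strip attention, roughly in
    the spirit of the STDA mechanism described in the abstract."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.strip = StripAttention(dim, heads)
        self.proj = nn.Linear(3 * dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, height, width, channels)
        t_out = self.strip(x, axis=1)   # long-range motion along time
        v_out = self.strip(x, axis=2)   # vertical spatial strips
        h_out = self.strip(x, axis=3)   # horizontal spatial strips
        return x + self.proj(torch.cat([t_out, v_out, h_out], dim=-1))


if __name__ == "__main__":
    feats = torch.randn(2, 16, 8, 8, 64)     # toy latent video features
    print(STDABlock(64)(feats).shape)        # torch.Size([2, 16, 8, 8, 64])
```

Each strip restricts attention to one direction, so the cost scales with the strip length rather than the full T×H×W token count, which is the usual motivation for strip attention on long sequences.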

Abstract

Emotional talking head video generation aims to produce expressive portrait videos with accurate lip synchronization and emotional facial expressions. Current methods rely on simple emotion labels, which carry insufficient semantic information. While introducing high-level semantics enhances expressiveness, it easily causes lip-sync degradation. Furthermore, mainstream generation methods struggle to balance computational efficiency and global motion awareness in long videos and suffer from poor temporal coherence. We therefore propose an Emotion-Aware Diffusion model-based Network, called EAD-Net. We introduce SyncNet supervision and Temporal Representation Alignment (TREPA) to mitigate the lip-sync degradation caused by multi-modal fusion. To model complex spatio-temporal dependencies in long video sequences, we propose a Spatio-Temporal Directional Attention (STDA) mechanism that captures global motion patterns through strip attention. Additionally, we design a Temporal Frame graph Reasoning Module (TFRM) that explicitly models temporal coherence between video frames through graph structure learning. To enhance emotional semantic control, a large language model extracts textual descriptions from real videos, which serve as high-level semantic guidance. Experiments on the HDTF and MEAD datasets demonstrate that our method outperforms existing methods in lip-sync accuracy, temporal consistency, and emotional accuracy.
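
The frame-graph reasoning idea can likewise be sketched compactly: pooled per-frame features become graph nodes, a learned soft adjacency links frames, and one round of message passing propagates information across time. This is a hypothetical sketch in the spirit of TFRM, not the paper's implementation; names and dimensions are illustrative.

```python
# Hypothetical frame-graph reasoning: learn a soft adjacency between frames,
# then propagate features along it (one GCN-like message-passing step).
import torch
import torch.nn as nn
import torch.nn.functional as F


class FrameGraphReasoning(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.update = nn.Linear(dim, dim)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, num_frames, dim) pooled per-frame features
        q, k = self.query(frames), self.key(frames)
        # Learned frame-to-frame adjacency (soft graph structure).
        adj = F.softmax(q @ k.transpose(1, 2) / q.shape[-1] ** 0.5, dim=-1)
        # One step of message passing plus a residual connection.
        return frames + self.update(adj @ frames)


if __name__ == "__main__":
    x = torch.randn(2, 16, 256)                  # 16 frames, 256-d features
    print(FrameGraphReasoning(256)(x).shape)     # torch.Size([2, 16, 256])
```

In a full pipeline such a module would operate on features produced by the diffusion backbone; here a random tensor stands in for them.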