Learning Spatial-Temporal Coherent Correlations for Speech-Preserving Facial Expression Manipulation

arXiv cs.CV / 4/23/2026


Key Points

  • The paper addresses speech-preserving facial expression manipulation (SPFEM), where emotions are modified while keeping mouth/phonation-related facial motion consistent with the spoken content.
  • It shows that, even without paired training data from the same person, speakers expressing the same content with different emotions share highly correlated local facial animations in both the spatial and temporal domains.
  • The proposed STCCL (Spatial-Temporal Coherent Correlation Learning) algorithm turns these correlations into explicit metrics and uses them to supervise expression manipulation while improving preservation of speech-related facial animation.
  • STCCL learns separate spatial and temporal coherent correlation metrics and adds a correlation-aware adaptive strategy that focuses training on harder regions.
  • During training, the method constructs spatial-temporal coherent correlation losses between corresponding local regions of input and generated output frames to guide the SPFEM model.
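The key points above describe the loss construction only loosely, and the paper's exact formulation is not reproduced in this summary. Below is a minimal NumPy sketch of what such spatial and temporal coherent correlation losses could look like, assuming grayscale frame sequences, non-overlapping square patches as "local regions", and cosine similarity as the correlation measure; all of these are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def patch_features(frames, patch=8):
    # frames: (T, H, W) grayscale sequence; split each frame into
    # non-overlapping patch x patch regions, flattened to feature vectors
    T, H, W = frames.shape
    gh, gw = H // patch, W // patch
    f = frames[:, :gh * patch, :gw * patch]
    f = f.reshape(T, gh, patch, gw, patch).transpose(0, 1, 3, 2, 4)
    return f.reshape(T, gh, gw, patch * patch)

def cosine(a, b, eps=1e-8):
    # cosine similarity along the last (feature) axis
    num = (a * b).sum(-1)
    den = np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1) + eps
    return num / den

def spatial_correlation(feats):
    # correlation of each region with its right-hand neighbour in the frame
    return cosine(feats[:, :, :-1], feats[:, :, 1:])

def temporal_correlation(feats):
    # correlation of each region with the same region in the next frame
    return cosine(feats[:-1], feats[1:])

def stcc_loss(src, gen, patch=8):
    # penalise mismatch between the correlation structure of the input
    # sequence and that of the generated sequence
    fs, fg = patch_features(src, patch), patch_features(gen, patch)
    l_sp = np.abs(spatial_correlation(fs) - spatial_correlation(fg)).mean()
    l_tp = np.abs(temporal_correlation(fs) - temporal_correlation(fg)).mean()
    return l_sp + l_tp
```

By construction, a generated sequence whose local correlation structure matches the input's incurs zero loss, which matches the stated goal: the emotion (pixel appearance) may change freely, but the spatial-temporal co-movement of regions should be preserved.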

Abstract

Speech-preserving facial expression manipulation (SPFEM) aims to modify facial emotions while meticulously maintaining the mouth animation associated with spoken content. Current works depend on paired training samples for the same person, in which two aligned frames share the same speech content yet differ in emotional expression; such pairs are rarely accessible, limiting SPFEM's application in real-world scenarios. In this work, we discover that speakers who convey the same content with different emotions exhibit highly correlated local facial animations in both the spatial and temporal domains, providing valuable supervision for SPFEM. To capitalize on this insight, we propose a novel spatial-temporal coherent correlation learning (STCCL) algorithm, which models these correlations as explicit metrics and integrates them to supervise expression manipulation while better preserving the facial animation of spoken content. To this end, STCCL first learns a spatial coherent correlation metric, ensuring that the visual correlations between adjacent local regions within an image associated with one emotion closely resemble those between the corresponding regions in an image associated with a different emotion. Simultaneously, it develops a temporal coherent correlation metric, ensuring that the visual correlations of a given region across adjacent frames associated with one emotion are similar to those of the corresponding region in frames associated with another emotion. Recognizing that visual correlations are not uniform across regions, we also craft a correlation-aware adaptive strategy that prioritizes regions presenting greater challenges. During SPFEM model training, we construct the spatial-temporal coherent correlation metric between corresponding local regions of the input and output frames as an additional loss to supervise the generation process.
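The abstract says the correlation-aware adaptive strategy "prioritizes regions presenting greater challenges" but gives no formula. One plausible, purely illustrative reading is to re-weight per-region correlation discrepancies with a softmax, so regions with larger discrepancies contribute more to the loss; the softmax form and the `temperature` parameter below are assumptions, not the paper's method:

```python
import numpy as np

def adaptive_region_loss(disc, temperature=1.0):
    # disc: per-region correlation discrepancies, shape (R,), all >= 0.
    # Harder regions (larger discrepancy) receive larger weights via a
    # softmax over discrepancies; in a real training loop the weights
    # would typically be treated as constants (stop-gradient), which is
    # moot here since plain NumPy has no autograd.
    w = np.exp(np.asarray(disc, dtype=float) / temperature)
    w = w / w.sum()
    return float((w * disc).sum())
```

With uniform discrepancies this reduces to the plain mean; with skewed discrepancies the weighted loss exceeds the mean, so gradient pressure concentrates on the regions that are currently hardest to match.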