Learning Spatial-Temporal Coherent Correlations for Speech-Preserving Facial Expression Manipulation
arXiv cs.CV / 4/23/2026
Key Points
- The paper addresses speech-preserving facial expression manipulation (SPFEM), where emotions are modified while keeping mouth/phonation-related facial motion consistent with the spoken content.
- It shows that even when paired training data is unavailable for the same person, speakers expressing the same content with different emotions share highly correlated local facial animations in both spatial and temporal domains.
- The proposed STCCL (Spatial-Temporal Coherent Correlation Learning) algorithm turns these correlations into explicit metrics and uses them to supervise expression manipulation while improving preservation of speech-related facial animation.
- STCCL learns separate spatial and temporal coherent correlation metrics and adds a correlation-aware adaptive strategy that focuses training on harder regions.
- During training, the method constructs spatial-temporal coherent correlation losses between corresponding local regions of input and generated output frames to guide the SPFEM model.
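The spatial-temporal coherent correlation losses described above can be illustrated with a minimal NumPy sketch. The paper's exact metrics and its correlation-aware adaptive weighting are not specified in this summary, so the function names (`extract_patches`, `stccl_loss`), the cosine-similarity choice of correlation, and the grayscale input shape are all illustrative assumptions:

```python
import numpy as np

def extract_patches(frames, patch=16):
    """Split each frame into non-overlapping local regions.

    frames: (T, H, W) grayscale frame sequence (assumed shape)
    returns: (T, N, patch*patch) flattened local-region vectors
    """
    t, h, w = frames.shape
    gh, gw = h // patch, w // patch
    x = frames[:, :gh * patch, :gw * patch]
    x = x.reshape(t, gh, patch, gw, patch).transpose(0, 1, 3, 2, 4)
    return x.reshape(t, gh * gw, patch * patch)

def cosine(a, b, eps=1e-8):
    # per-region cosine similarity, used here as a stand-in correlation metric
    num = (a * b).sum(-1)
    den = np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1) + eps
    return num / den

def stccl_loss(inp, gen, patch=16):
    """Hypothetical spatial + temporal coherent correlation loss
    between corresponding local regions of input and generated frames."""
    pi, pg = extract_patches(inp, patch), extract_patches(gen, patch)
    # spatial term: corresponding regions in the same frame stay correlated
    spatial = (1.0 - cosine(pi, pg)).mean()
    # temporal term: frame-to-frame region dynamics stay correlated
    di, dg = np.diff(pi, axis=0), np.diff(pg, axis=0)
    temporal = (1.0 - cosine(di, dg)).mean()
    return spatial + temporal
```

In training, such a loss would be added to the generator objective so that speech-related local motion in the output tracks the input; identical sequences yield a loss near zero, and the loss grows as local regions decorrelate.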