Interpretable facial dynamics as behavioral and perceptual traces of deepfakes

arXiv cs.CV / 4/24/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper proposes an interpretable deepfake-detection approach based on low-dimensional bio-behavioral features of facial dynamics, rather than relying solely on opaque deep learning models.
  • Using temporal and spatiotemporal features derived from core facial movement patterns, traditional classifiers achieve above-chance deepfake discrimination, with stronger signals from higher-order temporal irregularities in manipulated videos.
  • Detection performance is notably better for videos with emotive expressions, and additional analysis suggests deepfakes systematically degrade emotional valence cues.
  • The study compares model decisions with human perceptual judgments, finding convergence for emotive content but divergence for non-emotive content, indicating that explainable computational features may be complementary to human perception.
  • Overall, face-swapped deepfakes exhibit a measurable behavioral fingerprint that is most evident during emotional expression, informing both detection and explainability research.

Abstract

Deepfake detection research has largely converged on deep learning approaches that, despite strong benchmark performance, offer limited insight into what distinguishes real from manipulated facial behavior. This study presents an interpretable alternative grounded in bio-behavioral features of facial dynamics and evaluates how computational detection strategies relate to human perceptual judgments. We identified core low-dimensional patterns of facial movement, from which we derived temporal features characterizing their spatiotemporal structure. Traditional machine learning classifiers trained on these features achieved modest but statistically significant above-chance deepfake classification, driven by higher-order temporal irregularities that were more pronounced in manipulated than in real facial dynamics. Notably, detection was substantially more accurate for videos containing emotive expressions than for those without. An emotional valence classification analysis further indicated that emotive signals are systematically degraded in deepfakes, explaining the differential impact of emotive dynamics on detection. We also address an often overlooked dimension of explainability by assessing the relationship between model decisions and human perceptual detection. Model and human judgments converged for emotive but diverged for non-emotive videos, and even where outputs aligned, the underlying detection strategies differed. These findings demonstrate that face-swapped deepfakes carry a measurable behavioral fingerprint, most salient during emotional expression. Additionally, model-human comparisons suggest that interpretable computational features and human perception may offer complementary rather than redundant routes to detection.
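To make the "low-dimensional features plus traditional classifier" idea concrete, here is a minimal sketch of such a pipeline. Everything below is an illustrative assumption, not the paper's method: the synthetic trajectories, the per-video PCA reduction to a few movement components, and the choice of velocity/acceleration moments as "higher-order temporal irregularity" features are all stand-ins.

```python
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def temporal_features(traj):
    """Summarize a (frames, dims) movement trajectory with simple
    temporal statistics of its velocity and acceleration signals."""
    vel = np.diff(traj, axis=0)    # frame-to-frame change
    acc = np.diff(vel, axis=0)     # second-order irregularity
    feats = []
    for sig in (vel, acc):
        feats += [sig.std(axis=0), skew(sig, axis=0), kurtosis(sig, axis=0)]
    return np.concatenate(feats)

def make_video(jitter):
    """Synthetic stand-in for a facial-landmark trajectory: smooth drift
    plus frame-wise noise. 'Fakes' get extra high-frequency jitter
    (purely illustrative, not a claim about real deepfake artifacts)."""
    base = np.cumsum(rng.normal(0, 0.05, (120, 20)), axis=0)
    return base + rng.normal(0, jitter, (120, 20))

videos = [make_video(0.01) for _ in range(40)] + [make_video(0.2) for _ in range(40)]
labels = np.array([0] * 40 + [1] * 40)  # 0 = "real", 1 = "fake"

# Reduce each raw trajectory to a few core movement components,
# then featurize the component time series.
X = np.stack([
    temporal_features(PCA(n_components=5).fit_transform(v)) for v in videos
])

# A transparent linear classifier on the interpretable features.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, labels, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

The appeal of this style of pipeline is that each feature has a behavioral reading (e.g., heavy-tailed acceleration suggests jerky, unnatural motion), so a classifier's weights can be inspected directly, unlike an end-to-end deep model.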