Rethinking Cross-Domain Evaluation for Face Forgery Detection with Semantic Fine-grained Alignment and Mixture-of-Experts

arXiv cs.CV / 4/24/2026


Key Points

  • The paper argues that current face forgery detectors underperform across datasets because evaluation metrics (notably cross-dataset AUC) fail to capture cross-domain score comparability issues.
  • It introduces Cross-AUC, a metric designed to compute AUC across dataset pairs by contrasting real samples from one dataset with fake samples from another (and vice versa), making score shifts across domains visible.
  • The authors find that applying Cross-AUC to representative detectors reveals significant performance drops, indicating an overlooked robustness problem in cross-domain evaluation.
  • They also propose SFAM (Semantic Fine-grained Alignment and Mixture-of-Experts), which uses a patch-level image-text alignment module to increase CLIP sensitivity to manipulation artifacts and a facial-region mixture-of-experts module for region-aware forgery analysis.
  • Experiments on public datasets show the proposed approach achieves better performance than state-of-the-art methods across multiple metrics.
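The Cross-AUC idea in the key points above can be sketched in a few lines. This is a hypothetical reading of the metric, not the paper's reference implementation: it assumes Cross-AUC pairs real samples from one dataset with fake samples from the other in both directions and averages the two AUCs (the paper's exact aggregation may differ). The AUC itself is computed via the rank-based (Mann-Whitney) formulation.

```python
def auc(fake_scores, real_scores):
    """ROC AUC via the Mann-Whitney formulation: the probability that a
    fake sample scores higher than a real one, counting ties as 0.5."""
    wins = 0.0
    for f in fake_scores:
        for r in real_scores:
            if f > r:
                wins += 1.0
            elif f == r:
                wins += 0.5
    return wins / (len(fake_scores) * len(real_scores))


def cross_auc(real_a, fake_a, real_b, fake_b):
    """Hypothetical Cross-AUC sketch: contrast real samples from one
    dataset with fake samples from the other (both directions), then
    average. A detector whose scores shift between domains scores
    poorly here even if its within-dataset AUCs are perfect."""
    return 0.5 * (auc(fake_b, real_a) + auc(fake_a, real_b))
```

As a toy illustration of the score-shift problem: with `real_a = [0.1, 0.2]`, `fake_a = [0.3, 0.4]`, `real_b = [0.6, 0.7]`, `fake_b = [0.8, 0.9]`, both within-dataset AUCs are a perfect 1.0, yet `cross_auc` drops to 0.5 because dataset B's real scores sit above dataset A's fake scores.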

Abstract

Visual forgery detection plays an increasingly important role in social and economic security as generative models develop rapidly. Existing face forgery detectors still fall short of satisfactory performance because they generalize poorly across datasets. A key factor behind this phenomenon is the lack of suitable metrics: the commonly used cross-dataset AUC fails to reveal an important issue, namely that detection scores may shift significantly across data domains. To explicitly evaluate cross-domain score comparability, we propose **Cross-AUC**, an evaluation metric that computes AUC across dataset pairs by contrasting real samples from one dataset with fake samples from another (and vice versa). Evaluating representative detectors under the Cross-AUC metric reveals substantial performance drops, exposing an overlooked robustness problem. We further propose a novel framework, **S**emantic **F**ine-grained **A**lignment and **M**ixture-of-Experts (**SFAM**), consisting of a patch-level image-text alignment module that enhances CLIP's sensitivity to manipulation artifacts, and a facial-region mixture-of-experts module that routes features from different facial regions to specialized experts for region-aware forgery analysis. Extensive qualitative and quantitative experiments on public datasets show that the proposed method outperforms state-of-the-art methods under a range of suitable metrics.
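The facial-region mixture-of-experts module described in the abstract can be illustrated with a minimal routing sketch. Everything here is an assumption for illustration: the class name `RegionMoE`, the hard routing by a precomputed region label, and the linear "experts" are all hypothetical stand-ins; the paper's actual experts operate on CLIP patch features and its routing may be learned rather than label-driven.

```python
import random


class RegionMoE:
    """Toy region-aware mixture-of-experts (hypothetical sketch).

    Each facial region (e.g. "eyes", "mouth") owns one linear expert,
    here just a weight vector. Every patch feature is hard-routed to
    the expert of its region label, and the per-patch scores are
    averaged into a single forgery score."""

    def __init__(self, regions, dim, seed=0):
        rng = random.Random(seed)  # fixed seed for reproducibility
        # one weight vector ("expert") per facial region
        self.experts = {
            r: [rng.uniform(-1, 1) for _ in range(dim)] for r in regions
        }

    def __call__(self, patches):
        # patches: list of (region_label, feature_vector) pairs;
        # each patch is scored only by its own region's expert
        scores = [
            sum(w * x for w, x in zip(self.experts[region], feat))
            for region, feat in patches
        ]
        return sum(scores) / len(scores)
```

The design point this sketch captures is specialization: artifacts around the eyes and artifacts around the mouth have different statistics, so each region's features are analyzed by parameters trained (here, merely initialized) for that region rather than by one shared head.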