Benchmarking Vision Foundation Models for Domain-Generalizable Face Anti-Spoofing

arXiv cs.CV / 4/22/2026


Key Points

  • The paper tackles face anti-spoofing (FAS), where the central challenge is robust domain generalization to unseen environments, evaluated under cross-domain benchmarks.
  • It argues that Vision-Language Model approaches can be computationally expensive and high-latency, motivating a vision-only foundation-model baseline.
  • The authors systematically benchmark 15 pre-trained vision models (supervised CNNs/ViTs and self-supervised ViTs) using the MICO and LSD protocols to stress-test domain generalization.
  • Results show that self-supervised vision models—especially DINOv2 with Registers—best suppress attention artifacts and learn fine-grained spoofing cues.
  • By combining the tuned vision-only backbone with FAS-Aug, PDA, and APL, the method achieves state-of-the-art performance on MICO and strong results on LSD with better computational efficiency than prior approaches.

Abstract

Face Anti-Spoofing (FAS) remains challenging due to the requirement for robust domain generalization across unseen environments. While recent trends leverage Vision-Language Models (VLMs) for semantic supervision, these multimodal approaches often demand prohibitive computational resources and exhibit high inference latency. Furthermore, their efficacy is inherently limited by the quality of the underlying visual features. This paper revisits the potential of vision-only foundation models to establish a highly efficient and robust baseline for FAS. We conduct a systematic benchmarking of 15 pre-trained models, comprising supervised CNNs, supervised ViTs, and self-supervised ViTs, under severe cross-domain scenarios including the MICO and Limited Source Domains (LSD) protocols. Our comprehensive analysis reveals that self-supervised vision models, particularly DINOv2 with Registers, significantly suppress attention artifacts and capture critical, fine-grained spoofing cues. Combined with Face Anti-Spoofing Data Augmentation (FAS-Aug), Patch-wise Data Augmentation (PDA), and Attention-weighted Patch Loss (APL), our proposed vision-only baseline achieves state-of-the-art performance on the MICO protocol. This baseline outperforms existing methods under the data-constrained LSD protocol while maintaining superior computational efficiency. This work provides a definitive vision-only baseline for FAS, demonstrating that optimized self-supervised vision transformers can serve as a backbone for both vision-only and future multimodal FAS systems. The project page is available at: https://gsisaoki.github.io/FAS-VFMbenchmark-CVPRW2026/