Generalized Disguise Makeup Presentation Attack Detection Using an Attention-Guided Patch-Based Framework
arXiv cs.CV / 4/30/2026
Key Points
- The paper addresses the challenge of disguise makeup presentation attacks that can fool facial recognition systems by using realistic cosmetics, prosthetics, and materials.
- It proposes a generalized detection framework with a two-phase approach: first, a style-invariant full-face model generates region attention maps (via Grad-CAM); second, a patch-based stage performs localized, region-specific analysis on the highlighted regions.
- The method uses metric learning and a whitening transformation to improve discrimination and reduce sensitivity to stylistic variations in faces.
- A new real-world dataset is introduced, containing live and disguise makeup faces with broad variation in subjects, environments, and disguise materials.
- Experiments show strong cross-dataset generalization, reporting 8.97% ACER and 9.76% EER on the new dataset and very low error rates on SIW-Mv2 spoof categories.
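The two-phase design described in the key points can be sketched minimally: a full-face attention map (e.g., an upsampled Grad-CAM heatmap) ranks candidate regions, and the highest-scoring patches are cropped for the second-stage, region-specific classifier. The patch size, top-k count, and helper names below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def topk_patches(image, attention, patch=32, k=4):
    """Crop the k non-overlapping patches with the highest mean attention.

    image:     (H, W, C) array, e.g. a face crop
    attention: (H, W) non-negative map (e.g. an upsampled Grad-CAM heatmap)
    Returns a list of (row, col, patch_array) tuples, best-first.
    """
    H, W = attention.shape
    scores = []
    for r in range(0, H - patch + 1, patch):
        for c in range(0, W - patch + 1, patch):
            scores.append((attention[r:r + patch, c:c + patch].mean(), r, c))
    scores.sort(reverse=True)  # highest mean attention first
    return [(r, c, image[r:r + patch, c:c + patch]) for _, r, c in scores[:k]]
```

A second-stage model would then score each returned patch independently, and the per-patch decisions would be fused into a final live/attack verdict.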
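The whitening transformation mentioned above is, generically, a linear map that decorrelates feature dimensions and scales them to unit variance, which can reduce sensitivity to correlated stylistic variation. A minimal ZCA-style sketch (an assumption about the general technique, not the paper's exact formulation):

```python
import numpy as np

def fit_whitening(X, eps=1e-5):
    """Fit a ZCA whitening transform on feature rows X (n_samples, n_dims)."""
    mu = X.mean(axis=0)
    cov = np.cov(X - mu, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)           # eigendecomposition of covariance
    W = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T
    return mu, W

def whiten(X, mu, W):
    """Apply the fitted transform; output has ~identity covariance."""
    return (X - mu) @ W
```

A metric-learning loss (e.g., contrastive or triplet) would then operate on the whitened embeddings, where Euclidean distances are not dominated by a few high-variance, correlated directions.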
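For reference on the reported numbers: ACER averages APCER (attack presentations wrongly accepted) and BPCER (bona fide presentations wrongly rejected) at a fixed decision threshold, while EER is the error rate at the threshold where the two rates are equal. A minimal sketch, assuming higher scores mean "more live-like":

```python
import numpy as np

def acer(scores, labels, thr=0.5):
    """ACER = (APCER + BPCER) / 2 at threshold `thr`.
    labels: 1 = bona fide (live), 0 = attack."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    apcer = np.mean(scores[labels == 0] >= thr)  # attacks wrongly accepted
    bpcer = np.mean(scores[labels == 1] < thr)   # bona fides wrongly rejected
    return (apcer + bpcer) / 2

def eer(scores, labels):
    """Equal error rate: sweep thresholds, return the rate where APCER ~ BPCER."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    best_gap, best_rate = 1.0, 1.0
    for thr in np.unique(scores):
        apcer = np.mean(scores[labels == 0] >= thr)
        bpcer = np.mean(scores[labels == 1] < thr)
        if abs(apcer - bpcer) < best_gap:
            best_gap, best_rate = abs(apcer - bpcer), (apcer + bpcer) / 2
    return best_rate
```

So the paper's 8.97% ACER means that, at the chosen operating threshold, attack acceptances and bona fide rejections average to roughly 9 errors per 100 presentations.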