CAAP: Capture-Aware Adversarial Patch Attacks on Palmprint Recognition Models
arXiv cs.CV / 4/9/2026
Key Points
- The paper proposes CAAP, a capture-aware adversarial patch attack framework designed specifically for palmprint recognition systems used in security-critical access control and payments.
- CAAP learns a universal, reusable patch that remains effective under realistic physical acquisition variations, addressing limitations of prior “digital-only” adversarial research.
- Using a cross-shaped patch topology, CAAP combines three modules — input-conditioned rendering (ASIT), stochastic capture simulation (RaS), and feature-level guidance (MS-DIFE) — to more effectively disrupt the continuity of palmprint ridge and crease textures.
- Evaluations on Tongji, IITD, and AISEC show strong untargeted and targeted performance with good transferability across different model architectures and datasets.
- The authors find that adversarial training can only partially mitigate the attacks, leaving substantial residual vulnerability and motivating more robust defenses.
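To make the pipeline in the key points concrete, here is a minimal NumPy sketch of the two ideas that are easiest to illustrate: a cross-shaped patch mask and an EOT-style loop that averages over randomized capture conditions (a stand-in for the paper's RaS module). All function names, sizes, and jitter ranges below are illustrative assumptions, not the paper's actual implementation, which also involves gradient-based optimization and the ASIT/MS-DIFE modules.

```python
import numpy as np

def cross_mask(h, w, arm=0.2):
    # Binary cross-shaped mask: one horizontal and one vertical band.
    # (The cross topology comes from the paper; the arm width is arbitrary.)
    mask = np.zeros((h, w), dtype=bool)
    ah, aw = int(h * arm), int(w * arm)
    mask[(h - ah) // 2:(h + ah) // 2, :] = True   # horizontal arm
    mask[:, (w - aw) // 2:(w + aw) // 2] = True   # vertical arm
    return mask

def simulate_capture(img, rng):
    # Stand-in for stochastic capture simulation: random brightness/contrast
    # jitter plus additive sensor noise, clipped back to valid range.
    gain = rng.uniform(0.8, 1.2)
    bias = rng.uniform(-0.05, 0.05)
    noise = rng.normal(0.0, 0.01, size=img.shape)
    return np.clip(img * gain + bias + noise, 0.0, 1.0)

def apply_patch(img, patch, mask):
    # Paste the universal patch into the palm image at the masked region.
    out = img.copy()
    out[mask] = patch[mask]
    return out

rng = np.random.default_rng(0)
img = rng.uniform(0.3, 0.7, size=(64, 64))     # stand-in palm image
patch = rng.uniform(0.0, 1.0, size=(64, 64))   # universal adversarial patch
mask = cross_mask(64, 64)

# EOT-style evaluation: the attack objective would be averaged over many
# random capture conditions so the learned patch survives physical acquisition.
captures = [simulate_capture(apply_patch(img, patch, mask), rng) for _ in range(8)]
```

In a real attack the patch would be updated by backpropagating a recognition model's loss through this randomized rendering, so that a single reusable patch remains effective across cameras, lighting, and hand poses.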