Multimodal Models Meet Presentation Attack Detection on ID Documents

arXiv cs.CV / 4/1/2026


Key Points

  • The paper proposes integrating multimodal models into Presentation Attack Detection (PAD) for ID documents to better resist spoofing attacks that traditional visual-only systems may miss.
  • It uses pre-trained multimodal systems (e.g., Paligemma, LLaVA, and Qwen) to combine visual deep embeddings with textual/document metadata such as document type, issuer, and date.
  • Experimental findings suggest that, despite the multimodal fusion approach, these models still struggle to reliably detect presentation attacks on ID documents.
  • The work highlights both the potential and current limitations of applying general-purpose multimodal LLM/Vision models to specialized biometric security tasks like PAD.
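The fusion idea described above — concatenating a deep visual embedding with encoded document metadata before scoring — can be sketched as follows. This is an illustrative toy, not the paper's implementation: the visual embedding, the metadata encoder, and the classifier weights are all dummy stand-ins.

```python
# Illustrative sketch (not the paper's code): fusing a visual embedding
# with encoded document metadata for a bona-fide vs. attack decision.
# All values here are dummies chosen for demonstration only.
import numpy as np

def encode_metadata(doc_type: str, issuer: str, year: int) -> np.ndarray:
    """Toy metadata encoder: bucket categorical fields into a small vector."""
    buckets = 8
    vec = np.zeros(buckets)
    for field in (doc_type, issuer):
        # Deterministic character-sum hash so the example is reproducible.
        vec[sum(map(ord, field)) % buckets] += 1.0
    vec = np.append(vec, (year - 2000) / 30.0)  # crude year normalization
    return vec

def fuse_and_score(visual_emb: np.ndarray, meta: np.ndarray,
                   w: np.ndarray, b: float) -> float:
    """Concatenate the two modalities and apply a linear scorer + sigmoid."""
    fused = np.concatenate([visual_emb, meta])
    return float(1.0 / (1.0 + np.exp(-(fused @ w + b))))

rng = np.random.default_rng(0)
visual = rng.normal(size=16)              # stand-in for a deep visual embedding
meta = encode_metadata("passport", "DEU", 2019)
w = rng.normal(size=visual.size + meta.size)  # untrained dummy weights
score = fuse_and_score(visual, meta, w, b=0.0)
print(score)  # attack-likelihood score in (0, 1)
```

In the paper's setting, the visual embedding would come from a pre-trained multimodal backbone (e.g., Paligemma, LLaVA, or Qwen) and the scorer would be trained on labeled bona-fide and attack samples; this sketch only shows the fusion step.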

Abstract

The integration of multimodal models into Presentation Attack Detection (PAD) for ID Documents represents a significant advancement in biometric security. Traditional PAD systems rely solely on visual features, which often fail to detect sophisticated spoofing attacks. This study explores the combination of visual and textual modalities by utilizing pre-trained multimodal models, such as Paligemma, Llava, and Qwen, to enhance the detection of presentation attacks on ID Documents. This approach merges deep visual embeddings with contextual metadata (e.g., document type, issuer, and date). However, experimental results indicate that these models struggle to reliably detect presentation attacks on ID Documents.