ATTN-FIQA: Interpretable Attention-based Face Image Quality Assessment with Vision Transformers

arXiv cs.CV / April 28, 2026


Key Points

  • The paper introduces ATTN-FIQA, a training-free face image quality assessment method that derives interpretable quality scores from Vision Transformer attention.
  • It tests the hypothesis that pre-softmax attention magnitudes from pre-trained face recognition ViT models reflect image quality: high-quality faces produce focused, high-magnitude attention, while degraded faces produce diffuse, low-magnitude attention.
  • ATTN-FIQA computes image-level quality scores by extracting pre-softmax attention matrices from the final transformer block, aggregating multi-head attention across patches, and averaging, with no architectural changes or extra learning (see the sketch after the abstract).
  • Experiments across eight benchmark datasets and four face recognition models show that the attention-derived quality scores correlate with face image quality and also reveal which facial regions contribute most to the assessment.

Abstract

Face Image Quality Assessment (FIQA) aims to assess the recognition utility of face samples and is essential for reliable face recognition (FR) systems. Existing approaches require computationally expensive procedures such as multiple forward passes, backpropagation, or additional training, and only recent work has focused on the use of Vision Transformers. Recent studies have highlighted that these architectures inherently function as saliency learners, with attention patterns that naturally encode spatial importance. This work proposes ATTN-FIQA, a novel training-free approach that investigates whether pre-softmax attention scores from pre-trained Vision Transformer-based face recognition models can serve as quality indicators. We hypothesize that attention magnitudes intrinsically encode quality: high-quality images with discriminative facial features enable strong query-key alignments, producing focused, high-magnitude attention patterns, while degraded images generate diffuse, low-magnitude patterns. ATTN-FIQA extracts pre-softmax attention matrices from the final transformer block, aggregates multi-head attention information across all patches, and computes image-level quality scores through simple averaging, requiring only a single forward pass through pre-trained models without architectural modifications, backpropagation, or additional training. Through comprehensive evaluation across eight benchmark datasets and four FR models, this work demonstrates that attention-based quality scores effectively correlate with face image quality and provide spatial interpretability, revealing which facial regions contribute most to quality determination.
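
To make the scoring recipe concrete, below is a minimal PyTorch sketch of the pipeline the abstract describes: take the pre-softmax (scaled query-key) attention logits of the final transformer block of a pre-trained ViT and average their magnitudes over heads and patches into one image-level score. The fused-QKV linear layout, the absolute-magnitude aggregation, and all function names here are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch of ATTN-FIQA-style scoring; the QKV layout and
# the |.|-mean aggregation are assumptions, not the paper's exact code.
import torch
import torch.nn as nn


def pre_softmax_logits(q, k):
    # Scaled dot-product attention logits *before* softmax; their
    # magnitudes are the hypothesized quality signal.
    # q, k: (batch, heads, tokens, head_dim) -> (batch, heads, tokens, tokens)
    return (q @ k.transpose(-2, -1)) / q.shape[-1] ** 0.5


@torch.no_grad()
def attn_fiqa_score(qkv_layer, tokens, num_heads):
    # qkv_layer: the nn.Linear producing concatenated Q, K, V for the
    #            final block (a common ViT layout, assumed here).
    # tokens:    (batch, tokens, dim) embeddings entering that block.
    B, T, D = tokens.shape
    hd = D // num_heads
    qkv = qkv_layer(tokens).reshape(B, T, 3, num_heads, hd)
    q = qkv[:, :, 0].permute(0, 2, 1, 3)   # (B, H, T, hd)
    k = qkv[:, :, 1].permute(0, 2, 1, 3)
    logits = pre_softmax_logits(q, k)      # (B, H, T, T)
    # Single forward pass, no gradients: the mean attention magnitude
    # over heads and all token pairs is the image-level quality score.
    return logits.abs().mean(dim=(1, 2, 3))  # (B,)


@torch.no_grad()
def attn_fiqa_patch_map(qkv_layer, tokens, num_heads, grid_hw, has_cls=True):
    # Spatial interpretability: mean attention magnitude received by
    # each patch, reshaped to the patch grid (row-major order assumed).
    B, T, D = tokens.shape
    hd = D // num_heads
    qkv = qkv_layer(tokens).reshape(B, T, 3, num_heads, hd)
    q = qkv[:, :, 0].permute(0, 2, 1, 3)
    k = qkv[:, :, 1].permute(0, 2, 1, 3)
    per_patch = pre_softmax_logits(q, k).abs().mean(dim=(1, 2))  # (B, T)
    if has_cls:
        per_patch = per_patch[:, 1:]       # drop the CLS token
    return per_patch.reshape(B, *grid_hw)  # (B, grid_h, grid_w)
```

As a smoke test with random tensors standing in for a real FR backbone's final block (ViT-Base-like shapes assumed):

```python
qkv = nn.Linear(768, 3 * 768)       # hypothetical final-block QKV projection
x = torch.randn(2, 197, 768)        # 1 CLS + 14x14 patch tokens
print(attn_fiqa_score(qkv, x, num_heads=12).shape)      # torch.Size([2])
print(attn_fiqa_patch_map(qkv, x, 12, (14, 14)).shape)  # torch.Size([2, 14, 14])
```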