Learn to Rank: Visual Attribution by Learning Importance Ranking

arXiv cs.CV / 4/8/2026


Key Points

  • The paper addresses a key interpretability problem in computer vision by generating visual attribution maps that show which input regions drive a model’s predictions, aiming to improve trust and accountability in safety-critical settings.
  • It argues that prior attribution methods suffer from a three-way trade-off: propagation methods are efficient but may be biased and architecture-specific, perturbation methods are causally grounded but expensive and often coarse for vision transformers, and learning-based explainers are fast but rely on surrogate or teacher-driven objectives.
  • The authors propose a learning approach that directly optimizes deletion and insertion metrics, reformulating ranking as permutation learning and using a differentiable Gumbel-Sinkhorn relaxation to enable end-to-end training despite non-differentiable sorting.
  • Their method trains using attribution-guided perturbations of the target model and yields dense, pixel-level attributions in a single forward pass, with an option for a few-step gradient refinement during inference.
  • Experiments indicate consistent quantitative gains and sharper, boundary-aligned explanations, particularly for transformer-based vision models.
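The Gumbel-Sinkhorn relaxation mentioned above can be illustrated with a minimal NumPy sketch. This is not the authors' code: the function names, the rank-anchor parameterization of the log-score matrix, and all hyperparameters are illustrative assumptions. The idea is to add Gumbel noise to a log-score matrix and repeatedly normalize its rows and columns, yielding a differentiable, near-doubly-stochastic approximation of a hard sorting permutation.

```python
import numpy as np

def sinkhorn(log_alpha, n_iters=100):
    # Sinkhorn-Knopp in log-space: alternately normalize rows and columns
    # so exp(log_alpha) approaches a doubly-stochastic matrix.
    for _ in range(n_iters):
        log_alpha = log_alpha - np.logaddexp.reduce(log_alpha, axis=1, keepdims=True)
        log_alpha = log_alpha - np.logaddexp.reduce(log_alpha, axis=0, keepdims=True)
    return np.exp(log_alpha)

def gumbel_sinkhorn(scores, tau=1.0, n_iters=100, rng=None):
    # scores: (n,) importance scores. Build an n x n log-score matrix whose
    # (i, j) entry favors assigning rank j to element i; the rank-anchor
    # construction below is a hypothetical choice, not the paper's.
    rng = np.random.default_rng() if rng is None else rng
    n = scores.shape[0]
    anchors = np.linspace(scores.min(), scores.max(), n)[::-1]
    log_alpha = -((scores[:, None] - anchors[None, :]) ** 2)
    # Gumbel noise makes the relaxed permutation a reparameterized sample,
    # keeping gradients flowing through the temperature tau.
    gumbel = -np.log(-np.log(rng.uniform(size=(n, n))))
    return sinkhorn((log_alpha + gumbel) / tau, n_iters)
```

As tau decreases, the output concentrates toward a hard permutation matrix, which is what lets ranking-based metrics be optimized end-to-end.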

Abstract

Interpreting the decisions of complex computer vision models is crucial to establish trust and accountability, especially in safety-critical domains. An established approach to interpretability is generating visual attribution maps that highlight regions of the input most relevant to the model's prediction. However, existing methods face a three-way trade-off. Propagation-based approaches are efficient, but they can be biased and architecture-specific. Meanwhile, perturbation-based methods are causally grounded, yet they are expensive and for vision transformers often yield coarse, patch-level explanations. Learning-based explainers are fast but usually optimize surrogate objectives or distill from heuristic teachers. We propose a learning scheme that instead optimizes deletion and insertion metrics directly. Since these metrics depend on non-differentiable sorting and ranking, we frame them as permutation learning and replace the hard sorting with a differentiable relaxation using Gumbel-Sinkhorn. This enables end-to-end training through attribution-guided perturbations of the target model. During inference, our method produces dense, pixel-level attributions in a single forward pass with optional, few-step gradient refinement. Our experiments demonstrate consistent quantitative improvements and sharper, boundary-aligned explanations, particularly for transformer-based vision models.
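The deletion metric the abstract optimizes can be sketched as follows. This is a minimal illustration under assumed conventions, not the paper's evaluation code: `model` is any callable returning a class score, the pixel-wise masking schedule and the mean-score summary are simplifications of the usual area-under-curve formulation.

```python
import numpy as np

def deletion_score(model, image, attribution, n_steps=20, baseline=0.0):
    # Delete (replace with `baseline`) pixels in order of decreasing
    # attributed importance, recording the model's score at each step.
    # A faster score drop (lower average score) indicates a better map.
    order = np.argsort(attribution.ravel())[::-1]  # most important first
    img = image.copy().ravel()
    scores = [model(img.reshape(image.shape))]
    chunk = max(1, len(order) // n_steps)
    for i in range(0, len(order), chunk):
        img[order[i:i + chunk]] = baseline
        scores.append(model(img.reshape(image.shape)))
    # Mean score over the deletion curve (a normalized stand-in for AUC).
    return float(np.mean(scores))
```

The insertion metric is the mirror image: start from the baseline and re-insert pixels in the same order, rewarding maps whose top-ranked pixels restore the score fastest. Both depend on the sort in `np.argsort`, which is the non-differentiable step the Gumbel-Sinkhorn relaxation replaces during training.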