Learn to Rank: Visual Attribution by Learning Importance Ranking
arXiv cs.CV / 4/8/2026
Key Points
- The paper addresses a key interpretability problem in computer vision by generating visual attribution maps that show which input regions drive a model’s predictions, aiming to improve trust and accountability in safety-critical settings.
- It argues that prior attribution methods face a three-way trade-off: propagation methods are efficient but can be biased and architecture-specific; perturbation methods are causally grounded but expensive and often coarse for vision transformers; and learning-based explainers are fast but rely on surrogate or teacher-driven objectives.
- The authors propose a learning approach that directly optimizes deletion and insertion metrics, reformulating ranking as permutation learning and using a differentiable Gumbel-Sinkhorn relaxation to enable end-to-end training despite the non-differentiability of sorting (a minimal sketch of the relaxation follows this list).
- Their method trains on attribution-guided input perturbations scored by the target model and yields dense, pixel-level attributions in a single forward pass, with an optional few-step gradient refinement at inference time (a deletion-metric sketch also follows the list).
- Experiments indicate consistent quantitative gains and sharper, boundary-aligned explanations, particularly for transformer-based vision models.
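
The differentiable sorting step mentioned above follows the standard Gumbel-Sinkhorn construction (Mena et al., 2018): add Gumbel noise to a log-score matrix, divide by a temperature, and Sinkhorn-normalize toward a doubly stochastic (soft permutation) matrix. The paper's exact parameterization isn't given in this summary, so the sketch below is a minimal, generic version; all tensor names and shapes are illustrative assumptions.

```python
import torch

def sinkhorn(log_alpha: torch.Tensor, n_iters: int = 20) -> torch.Tensor:
    """Sinkhorn normalization in log space: alternately normalize rows and
    columns so the matrix converges toward a doubly stochastic matrix."""
    for _ in range(n_iters):
        log_alpha = log_alpha - torch.logsumexp(log_alpha, dim=-1, keepdim=True)  # rows
        log_alpha = log_alpha - torch.logsumexp(log_alpha, dim=-2, keepdim=True)  # columns
    return log_alpha.exp()

def gumbel_sinkhorn(log_scores: torch.Tensor, tau: float = 1.0,
                    n_iters: int = 20) -> torch.Tensor:
    """Soft permutation from an n x n log-score matrix. Gumbel noise makes
    the sample stochastic; as tau -> 0 the output approaches a hard
    permutation, while for tau > 0 everything stays differentiable,
    which is what permits end-to-end training through a sorting step."""
    gumbel = -torch.log(-torch.log(torch.rand_like(log_scores) + 1e-20) + 1e-20)
    return sinkhorn((log_scores + gumbel) / tau, n_iters)
```

For ranking, entry (i, j) of `log_scores` can be read as a learned score for placing item i at rank j, so the resulting soft permutation lets rank-dependent losses backpropagate into the scores.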
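
The deletion and insertion metrics the method optimizes were introduced with RISE (Petsiuk et al., 2018): remove (or reveal) pixels in order of attributed importance and track the target-class probability, so a good map makes the probability collapse quickly under deletion. The paper optimizes a differentiable surrogate of these curves via the soft permutation above; the sketch below only evaluates the deletion curve, and the zero baseline, step schedule, and function names are assumptions for illustration.

```python
import torch

@torch.no_grad()
def deletion_curve(model, image, attribution, target, steps=50):
    """Deletion metric: zero out pixels in decreasing order of attributed
    importance and record the target-class probability after each step.
    The area under the resulting curve (lower is better) is the score.
    `image` is (C, H, W), `attribution` is (H, W), baseline is zero."""
    order = attribution.flatten().argsort(descending=True)
    per_step = max(1, order.numel() // steps)
    x = image.clone()
    flat = x.view(x.shape[0], -1)  # (C, H*W) view sharing x's storage
    probs = []
    for i in range(0, order.numel(), per_step):
        flat[:, order[i:i + per_step]] = 0.0  # delete the next chunk of pixels
        p = model(x.unsqueeze(0)).softmax(dim=-1)[0, target]
        probs.append(p.item())
    return probs  # insertion is the mirror image: start blank and reveal
```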
Related Articles
[N] Just found out that Milla Jovovich is a dev, invested in AI, and just open sourced a project
Reddit r/MachineLearning

ALTK‑Evolve: On‑the‑Job Learning for AI Agents
Hugging Face Blog

Context Windows Are Getting Absurd — And That's a Good Thing
Dev.to

Google isn’t an AI-first company despite Gemini being great
Reddit r/artificial

GitHub Weekly: Copilot SDK Goes Public, Cloud Agent Breaks Free
Dev.to