Attribution Upsampling should Redistribute, Not Interpolate
arXiv cs.CV · March 18, 2026
Models & Research
Key Points
- Attribution methods in explainable AI rely on upsampling schemes designed for natural images; standard interpolators such as bilinear and bicubic corrupt attribution signals, producing spurious high-importance regions.
- The core issue is treating attribution upsampling as an interpolation problem isolated from the model's reasoning, rather than as a redistribution of attribution mass guided by the model's semantic boundaries.
- The authors introduce Universal Semantic-Aware Upsampling (USU), a ratio-form mass redistribution operator that preserves attribution mass and relative importance, and formalize four desiderata for faithful upsampling while proving interpolation violates three of them.
- Empirical results on ImageNet, CIFAR-10, and CUB-200 show USU improves faithfulness and yields qualitatively more coherent explanations across models with known attribution priors.
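The redistribution idea can be illustrated with a minimal sketch. The snippet below is not the authors' USU operator (whose ratio-form details are not given in this summary); it assumes a hypothetical semantic weight map and simply splits each coarse cell's attribution among its fine-grid pixels in proportion to those weights, so total attribution mass and per-cell relative importance are preserved, unlike bilinear interpolation, which can create or destroy mass.

```python
import numpy as np

def redistribute_upsample(attr, weights, scale):
    """Mass-preserving upsampling sketch (illustrative, not the paper's USU).

    Each coarse cell's attribution is split among the fine pixels it covers,
    in proportion to a semantic weight map — here a hypothetical stand-in
    for model-derived semantic boundary information.
    """
    H, W = attr.shape
    out = np.zeros((H * scale, W * scale))
    for i in range(H):
        for j in range(W):
            block = weights[i * scale:(i + 1) * scale,
                            j * scale:(j + 1) * scale]
            total = block.sum()
            if total > 0:
                # Redistribute this cell's mass along the weight map.
                out[i * scale:(i + 1) * scale,
                    j * scale:(j + 1) * scale] = attr[i, j] * block / total
            else:
                # Flat region: fall back to a uniform spread.
                out[i * scale:(i + 1) * scale,
                    j * scale:(j + 1) * scale] = attr[i, j] / scale ** 2
    return out

rng = np.random.default_rng(0)
attr = rng.random((7, 7))        # coarse attribution map (e.g., CAM-sized)
weights = rng.random((28, 28))   # hypothetical semantic weight map
up = redistribute_upsample(attr, weights, 4)
print(up.shape, np.isclose(up.sum(), attr.sum()))  # mass is conserved
```

Because each cell's mass is normalized within its own block, the total sum is conserved exactly and cells retain their relative ordering of importance, which interpolation generally does not guarantee.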