Attribution Upsampling should Redistribute, Not Interpolate
arXiv cs.CV / 3/18/2026
Key Points
- Attribution methods in explainable AI commonly enlarge low-resolution attribution maps using upsampling designed for natural images; standard interpolation methods such as bilinear and bicubic corrupt the attribution signal, producing spurious high-importance regions.
- The core issue is treating attribution upsampling as an interpolation problem isolated from the model's reasoning, rather than as a mass redistribution guided by the model's semantic boundaries.
- The authors introduce Universal Semantic-Aware Upsampling (USU), a ratio-form mass redistribution operator that preserves attribution mass and relative importance, and formalize four desiderata for faithful upsampling while proving interpolation violates three of them.
- Empirical results on ImageNet, CIFAR-10, and CUB-200 show USU improves faithfulness and yields qualitatively more coherent explanations across models with known attribution priors.
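The contrast between interpolation and mass redistribution can be made concrete with a small sketch. The snippet below is a hypothetical illustration, not the paper's USU implementation: it compares plain block replication, which inflates total attribution mass, with a redistribution operator that splits each low-resolution cell's mass over its high-resolution block in proportion to per-pixel weights (standing in for the semantic guidance USU would derive from the model). The names `upsample_nearest`, `upsample_redistribute`, `weights`, and `factor` are all assumptions for this sketch.

```python
import numpy as np

def upsample_nearest(attr, factor):
    # Plain replication: each low-res value is copied into a
    # factor x factor block, so total mass grows by factor**2.
    return np.kron(attr, np.ones((factor, factor)))

def upsample_redistribute(attr, weights, factor):
    # Redistribute each low-res cell's mass over its factor x factor
    # block, proportionally to per-pixel weights (e.g. a semantic mask).
    # Each block sums to the original cell value, so total mass is preserved.
    h, w = attr.shape
    out = np.zeros((h * factor, w * factor))
    for i in range(h):
        for j in range(w):
            rows = slice(i * factor, (i + 1) * factor)
            cols = slice(j * factor, (j + 1) * factor)
            block = weights[rows, cols]
            total = block.sum()
            if total > 0:
                out[rows, cols] = attr[i, j] * block / total
    return out

attr = np.array([[1.0, 0.0],
                 [0.0, 3.0]])          # total attribution mass = 4.0
weights = np.random.default_rng(0).random((4, 4))

replicated = upsample_nearest(attr, 2)
redistributed = upsample_redistribute(attr, weights, 2)

print(replicated.sum())                 # 16.0 -- mass inflated 4x
print(np.isclose(redistributed.sum(), attr.sum()))  # True -- mass preserved
```

The redistribution version also preserves relative importance between cells (a cell with three times the mass still carries three times the mass after upsampling), which is one of the desiderata the paper argues interpolation violates.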




