AI Navigate

Attribution Upsampling should Redistribute, Not Interpolate

arXiv cs.CV / 3/18/2026


Key Points

  • Attribution methods in explainable AI rely on upsampling techniques designed for natural images; standard bilinear and bicubic interpolation corrupts the attribution signal, producing spurious high-importance regions.
  • The core issue is treating attribution upsampling as an interpolation problem isolated from the model's reasoning, rather than as a mass redistribution problem guided by the model's semantic boundaries.
  • The authors introduce Universal Semantic-Aware Upsampling (USU), a ratio-form mass redistribution operator that preserves attribution mass and relative importance, and formalize four desiderata for faithful upsampling while proving interpolation violates three of them.
  • Controlled experiments on models with known attribution priors verify USU's formal guarantees, and evaluation on ImageNet, CIFAR-10, and CUB-200 shows consistent faithfulness gains and qualitatively more coherent explanations.

Abstract

Attribution methods in explainable AI rely on upsampling techniques that were designed for natural images, not saliency maps. Standard bilinear and bicubic interpolation systematically corrupts attribution signals through aliasing, ringing, and boundary bleeding, producing spurious high-importance regions that misrepresent model reasoning. We identify that the core issue is treating attribution upsampling as an interpolation problem that operates in isolation from the model's reasoning, rather than a mass redistribution problem where model-derived semantic boundaries must govern how importance flows. We present Universal Semantic-Aware Upsampling (USU), a principled method that reformulates upsampling through ratio-form mass redistribution operators, provably preserving attribution mass and relative importance ordering. Extending the axiomatic tradition of feature attribution to upsampling, we formalize four desiderata for faithful upsampling and prove that interpolation structurally violates three of them. These same three force any redistribution operator into a ratio form; the fourth selects the unique potential within this family, yielding USU. Controlled experiments on models with known attribution priors verify USU's formal guarantees; evaluation across ImageNet, CIFAR-10, and CUB-200 confirms consistent faithfulness improvements and qualitatively superior, semantically coherent explanations.
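To make the "mass redistribution" idea concrete, here is a minimal sketch (not the authors' implementation) of a ratio-form redistribution operator. Each low-resolution attribution cell's mass is split among its high-resolution pixels in proportion to a guidance map; `guidance_hr` is a hypothetical stand-in for the model-derived semantic weights that USU actually constructs. Because the per-cell weights sum to one, total attribution mass and the relative ordering of cell totals are preserved by construction, which plain bilinear interpolation does not guarantee.

```python
import numpy as np

def ratio_redistribute(attr_lr, guidance_hr):
    """Mass-preserving upsampling sketch.

    attr_lr:     (h, w) low-resolution attribution map.
    guidance_hr: (H, W) strictly positive guidance weights (here a
                 hypothetical stand-in for semantic boundary weights).
    Assumes H and W are integer multiples of h and w.
    """
    h, w = attr_lr.shape
    H, W = guidance_hr.shape
    sh, sw = H // h, W // w  # integer scale factors
    out = np.zeros((H, W))
    for i in range(h):
        for j in range(w):
            block = guidance_hr[i * sh:(i + 1) * sh, j * sw:(j + 1) * sw]
            # Ratio form: weights within each cell sum to 1, so the
            # cell's attribution mass is redistributed, never created.
            weights = block / block.sum()
            out[i * sh:(i + 1) * sh, j * sw:(j + 1) * sw] = attr_lr[i, j] * weights
    return out
```

A quick sanity check: summing the output over each 2×2 block of a 2×2→4×4 upsampling recovers the original attribution values exactly, illustrating the mass-preservation property the paper formalizes.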