SaliencyDecor: Enhancing Neural Network Interpretability through Feature Decorrelation

arXiv cs.CV / 4/29/2026

Key Points

  • Gradient-based saliency methods can yield noisy, unstable explanations because correlated feature dimensions blur attribution gradients across redundant directions (a minimal gradient-saliency sketch follows this list).
  • The paper identifies feature correlation as a structural limitation of gradient-based interpretability and introduces SaliencyDecor to mitigate it.
  • SaliencyDecor trains models with a feature decorrelation regularizer alongside classification and prediction consistency under feature masking, improving attribution fidelity without changing the model architecture or the saliency method.
  • Experiments across multiple benchmarks and architectures show SaliencyDecor produces sharper, more object-focused saliency maps while also improving predictive accuracy, suggesting the usual trade-off between explanation quality and performance may be avoidable.
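
For readers unfamiliar with the attribution methods the paper targets, here is a minimal vanilla-gradient saliency sketch. It is not taken from the paper; it assumes a standard image classifier that maps a batch of images to class logits:

```python
import torch

def vanilla_gradient_saliency(model, image, target_class):
    """Vanilla gradient saliency: |d(class score)/d(input)|, reduced over channels.

    image: a (1, C, H, W) tensor; returns an (H, W) attribution map.
    """
    image = image.clone().requires_grad_(True)
    score = model(image)[0, target_class]                  # logit of the target class
    score.backward()                                       # gradients w.r.t. the input pixels
    return image.grad.abs().max(dim=1).values.squeeze(0)   # channel-wise max -> (H, W) map
```

It is the diffusion of these input gradients across correlated, redundant feature directions that the paper identifies as the source of noisy, blurred attribution maps.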

Abstract

Gradient-based saliency methods are widely used to interpret deep neural networks, yet they often produce noisy and unstable explanations that align poorly with semantically meaningful input features. We argue that a fundamental cause of this behavior lies in the geometry of learned representations: correlated feature dimensions diffuse attribution gradients across redundant directions, resulting in blurred and unreliable saliency maps. To address this issue, we identify feature correlation as a structural limitation of gradient-based interpretability and propose SaliencyDecor, a training framework that enforces feature decorrelation to improve attribution fidelity without modifying saliency methods or model architectures. By reshaping the feature space toward orthogonality, our approach promotes more concentrated gradient flow and improves the fidelity of saliency-based explanations. SaliencyDecor jointly optimizes classification, prediction consistency under feature masking, and a decorrelation regularizer, requiring no architectural changes or inference-time overhead. Extensive experiments across multiple benchmarks and architectures demonstrate that our method produces substantially sharper and more object-focused saliency maps while simultaneously improving predictive performance, achieving accuracy gains across the datasets. These results establish our method as a principled mechanism for enhancing both interpretability and accuracy, challenging the conventional trade-off between explanation quality and model performance.
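
The abstract does not give the exact form of the three loss terms, so the following is only a plausible sketch of how such a joint objective could be implemented. The Bernoulli masking scheme, the covariance-based decorrelation penalty, the KL consistency term, the weights lam_cons/lam_dec, and the model.backbone/model.head split are all assumptions, not details from the paper:

```python
import torch
import torch.nn.functional as F

def decorrelation_penalty(features):
    """Penalize off-diagonal entries of the batch feature covariance (assumed form).

    features: (batch, dim) penultimate-layer activations.
    """
    z = features - features.mean(dim=0, keepdim=True)
    cov = (z.T @ z) / (z.shape[0] - 1)            # (dim, dim) sample covariance
    off_diag = cov - torch.diag(torch.diag(cov))  # zero out the diagonal
    return (off_diag ** 2).sum() / features.shape[1]

def saliencydecor_style_loss(model, x, y, mask_prob=0.3, lam_cons=1.0, lam_dec=0.1):
    """Joint objective: classification + consistency under feature masking + decorrelation."""
    feats = model.backbone(x)                     # assumed feature extractor, (batch, dim)
    logits = model.head(feats)                    # assumed classifier head
    cls_loss = F.cross_entropy(logits, y)

    # Prediction consistency under random feature masking (assumed Bernoulli mask + KL).
    mask = (torch.rand_like(feats) > mask_prob).float()
    masked_logits = model.head(feats * mask)
    cons_loss = F.kl_div(
        F.log_softmax(masked_logits, dim=1),
        F.softmax(logits.detach(), dim=1),
        reduction="batchmean",
    )

    dec_loss = decorrelation_penalty(feats)
    return cls_loss + lam_cons * cons_loss + lam_dec * dec_loss
```

Because all three terms act only on the training loss, an objective of this kind leaves the architecture and the downstream saliency method untouched, which matches the paper's claim of no architectural changes or inference-time overhead.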