From Local to Global to Mechanistic: An iERF-Centered Unified Framework for Interpreting Vision Models

arXiv cs.CV / 5/4/2026


Key Points

  • The paper proposes an iERF-centered interpretability framework that unifies local, global, and mechanistic explanations of vision models using a single analysis unit: the pointwise feature vector (PFV) plus its instance-specific effective receptive field (iERF).
  • It introduces Sharing Ratio Decomposition (SRD) to express each PFV as a mixture of upstream PFVs, propagating iERFs to produce activation-faithful, class-discriminative saliency maps that are robust to manipulations and noise (a toy sketch follows this list).
  • For global interpretability, it presents Concept-Anchored Feature Explanation (CAFE), using the iERF to semantically label latent vectors and ground sparse autoencoder features in verifiable pixel-level evidence.
  • To explain how concepts are composed across network depth, it proposes the Interlayer Concept Graph with Interlayer Concept Attribution (ICAT), and uses an interlayer insertion/deletion protocol to identify Integrated Gradients as the most faithful attribution instantiation.
  • Experiments across ResNet50, VGG16, and Vision Transformers show improved fidelity and robustness over baselines, including for dispersed SAE features, and the framework highlights dominant concept routes in correct, incorrect, and adversarial cases.
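
The sharing-ratio idea behind SRD can be pictured with a toy linear layer. The sketch below is illustrative only: the array shapes, the projection-based ratio definition, and all variable names are assumptions for exposition, not the paper's implementation.

```python
# Hypothetical sketch of a sharing-ratio decomposition for one downstream
# pointwise feature vector (PFV). Names and the exact ratio definition are
# illustrative assumptions, not the paper's actual method.
import numpy as np

rng = np.random.default_rng(0)

# Upstream feature map: 4 spatial locations, 8 channels (flattened toy layer).
upstream = rng.normal(size=(4, 8))        # PFVs x_q, q = 0..3
weights = rng.normal(size=(4, 8, 8))      # per-location linear maps W_q

# Each upstream PFV contributes a vector to the downstream PFV y_p.
contribs = np.einsum("qio,qi->qo", weights, upstream)  # (4, 8)
y_p = contribs.sum(axis=0)                             # downstream PFV

# One plausible sharing ratio: the fraction of y_p that each upstream
# location accounts for (projection onto y_p, normalized to sum to 1).
proj = contribs @ y_p                    # scalar contribution per location
sharing = proj / proj.sum()

# Propagating iERFs: if each upstream PFV carries a pixel mask (its iERF),
# the downstream iERF is their sharing-ratio-weighted combination.
ierf_up = rng.random(size=(4, 16, 16))   # toy per-location pixel masks
ierf_down = np.einsum("q,qhw->hw", sharing, ierf_up)

print("sharing ratios:", np.round(sharing, 3), "sum:", sharing.sum().round(3))
```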

Abstract

Modern vision models achieve remarkable accuracy, but explaining where evidence arises, what the model encodes, and how internal computations assemble that evidence remains fragmented. We introduce an iERF-centric framework that unifies local, global, and mechanistic interpretability around a single analysis unit: the pointwise feature vector (PFV) paired with its instance-specific effective receptive field (iERF). On the local side, Sharing Ratio Decomposition (SRD) expresses each PFV as a mixture of upstream PFVs via sharing ratios and propagates iERFs to construct class-discriminative saliency maps. SRD yields high-resolution, activation-faithful explanations, is robust to targeted manipulation and noise, and remains activation-agnostic across common nonlinearities. For the global view, we introduce Concept-Anchored Feature Explanation (CAFE), which uses the iERF as a semantic label, grounding abstract latent vectors in verifiable pixel-level evidence. With CAFE, we address the challenge of non-localized sparse autoencoder latents, especially in Transformers, where early self-attention mixes distant context. To answer how representations are composed through depth, we propose the Interlayer Concept Graph with Interlayer Concept Attribution (ICAT), which quantifies concept-to-concept influence while isolating layer pairs; an interlayer insertion/deletion protocol identifies Integrated Gradients as the most faithful instantiation. Empirically, across ResNet50, VGG16, and ViTs, our framework outperforms baselines in both fidelity and robustness, successfully interprets dispersed SAE features, and exposes dominant concept routes in correct, misclassified, and adversarial cases. Grounded in iERFs, our approach provides a coherent, evidence-backed map from pixels to concepts to decisions.
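
As a companion sketch, the interlayer attribution step can be instantiated with standard Integrated Gradients between two activations. The toy two-layer model, the zero baseline, and the step count below are assumptions; the deletion loop only mimics the spirit of an insertion/deletion faithfulness check, not the paper's exact protocol.

```python
# Minimal sketch: Integrated Gradients from an upstream activation to a
# downstream unit, as one way to instantiate interlayer attribution.
import torch

torch.manual_seed(0)

# Toy downstream computation (stand-in for the layers between two concepts).
layer2 = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.ReLU(),
                             torch.nn.Linear(16, 1))

h1 = torch.randn(16)             # upstream activation (stand-in for a concept)
baseline = torch.zeros_like(h1)  # zero baseline; other baselines are possible
steps = 64

# Integrated Gradients: average the gradient of the downstream unit along
# the straight path from baseline to h1, then scale by (h1 - baseline).
grads = torch.zeros_like(h1)
for k in range(1, steps + 1):
    point = (baseline + (k / steps) * (h1 - baseline)).requires_grad_(True)
    layer2(point).squeeze().backward()
    grads += point.grad
ig = (h1 - baseline) * grads / steps  # attribution of h1 units to the output

# Deletion-style check: zeroing the most attributed upstream units should
# change the downstream activation fastest if the attribution is faithful.
order = ig.abs().argsort(descending=True)
with torch.no_grad():
    for frac in (0.0, 0.25, 0.5):
        masked = h1.clone()
        masked[order[: int(frac * len(h1))]] = 0.0
        print(f"deleted {frac:.0%}: output = {layer2(masked).item():+.3f}")
```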