H-Sets: Hessian-Guided Discovery of Set-Level Feature Interactions in Image Classifiers
arXiv cs.AI / 4/27/2026
Key Points
- The paper argues that current feature attribution methods mostly capture marginal effects and miss higher-order feature interactions, which are crucial for interpretability in image classifiers.
- It introduces H-Sets, a two-stage approach that first uses input Hessians to find locally interacting feature pairs and then recursively merges them into semantically coherent feature sets, using Segment Anything (SAM) masks as a spatial grouping prior.
- It also proposes IDG-Vis, a set-level extension of Integrated Directional Gradients that traces directional gradients along pixel-space paths and aggregates contributions using Harsanyi dividends to attribute each discovered set.
- Although the Hessian-based detection adds extra computation, experiments on VGG, ResNet, DenseNet, and MobileNet across ImageNet and CUB show that H-Sets produces sparser and more faithful saliency maps than prior interaction attribution methods.
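The pair-detection idea in the first stage can be illustrated with a toy sketch: off-diagonal entries of the input Hessian, H_ij = ∂²f/∂x_i∂x_j, measure how strongly features i and j interact in the model output, and ranking pairs by |H_ij| surfaces candidates for merging. This is a minimal finite-difference illustration on a hand-written "logit" function, not the paper's implementation (which computes Hessians of an actual image classifier); the function names and the toy f are hypothetical.

```python
import itertools
import numpy as np

def cross_partials(f, x, eps=1e-4):
    """Estimate off-diagonal input-Hessian entries d^2 f / dx_i dx_j
    with central finite differences; |H_ij| serves as the pairwise
    interaction strength used to rank candidate feature pairs."""
    n = x.size
    H = np.zeros((n, n))
    for i, j in itertools.combinations(range(n), 2):
        def shifted(di, dj):
            z = x.copy()
            z[i] += di * eps
            z[j] += dj * eps
            return f(z)
        # Standard 4-point stencil for the mixed second derivative.
        H[i, j] = H[j, i] = (
            shifted(1, 1) - shifted(1, -1) - shifted(-1, 1) + shifted(-1, -1)
        ) / (4 * eps ** 2)
    return H

def top_interacting_pairs(f, x, k=1):
    """Return the k feature pairs with the largest |H_ij|."""
    H = np.abs(cross_partials(f, x))
    pairs = list(itertools.combinations(range(x.size), 2))
    return sorted(pairs, key=lambda p: -H[p])[:k]

# Toy "logit": x0 and x1 interact multiplicatively, x2 acts alone,
# so only the (0, 1) pair has a nonzero cross-partial.
f = lambda x: x[0] * x[1] + x[2] ** 2
x = np.array([0.5, -1.0, 2.0])
print(top_interacting_pairs(f, x))  # → [(0, 1)]
```

For a real classifier one would differentiate the class logit with automatic differentiation rather than finite differences, and restrict candidate pairs to spatially plausible ones, which is where the SAM segmentation prior comes in.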
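The Harsanyi dividend used by IDG-Vis to aggregate set-level contributions has a simple closed form: for a value function v on feature subsets, the dividend of a set S is the inclusion–exclusion sum w(S) = Σ_{T⊆S} (−1)^{|S|−|T|} v(T), which isolates the part of v(S) not explained by any proper subset. The sketch below computes it for a hand-picked two-feature value function; the numbers are illustrative, not from the paper.

```python
from itertools import combinations

def subsets(s):
    """All subsets of the frozenset s, including the empty set."""
    items = sorted(s)
    return [frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]

def harsanyi_dividend(v, S):
    """Inclusion-exclusion over subsets T of S:
    w(S) = sum_{T subset of S} (-1)^(|S|-|T|) * v(T).
    A nonzero dividend on a set with |S| >= 2 signals a genuine
    set-level interaction beyond the features' marginal effects."""
    return sum((-1) ** (len(S) - len(T)) * v[T] for T in subsets(S))

# Toy value function on subsets of two features {0, 1}.
v = {
    frozenset(): 0.0,
    frozenset({0}): 1.0,
    frozenset({1}): 2.0,
    frozenset({0, 1}): 5.0,  # more than 1 + 2: positive interaction
}
print(harsanyi_dividend(v, frozenset({0, 1})))  # → 2.0
```

A useful sanity check on this definition: the dividends of all subsets sum back to the value of the full set (0 + 1 + 2 + 2 = 5 here), so attributing each discovered set its dividend yields an exact decomposition of the model output.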