Bucketing the Good Apples: A Method for Diagnosing and Improving Causal Abstraction

arXiv cs.AI / 5/5/2026


Key Points

  • The paper introduces a diagnostic method for neural-network interpretation that finds an input subspace where a proposed interpretation is especially faithful.
  • It refines causal-abstraction evaluation by replacing a single global “interchange intervention accuracy” score with a partition of the input space into well-interpreted and under-interpreted regions, based on pairwise interchange-intervention behavior (see the sketch after this list).
  • The approach makes causal abstraction more actionable by showing not only whether an interpretation works, but also where it succeeds or fails and what differentiates those cases.
  • The authors provide practical heuristics to improve interpretations, including identifying missing distinctions in high-level hypotheses, discovering unmodeled intermediate variables, and combining partial interpretations.
  • The method is packaged as a four-step recipe that yields informative error analyses across multiple settings; in a toy logic task, applying the recipe recursively recovers a high-level hypothesis from scratch.
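
To make the interchange-intervention check concrete, here is a minimal sketch on a toy Boolean task, y = (a AND b) OR c. The toy “network”, its exposed intermediate value, and the hypothesized high-level variable V = a AND b are illustrative assumptions, not the paper's setup; the intermediate is deliberately misaligned with V whenever c = 1, so that some interventions fail even though the network's input–output behavior is correct.

```python
# Illustrative sketch (not the paper's code) of one interchange intervention
# on a toy Boolean task y = (a AND b) OR c. The "network" below is behaviorally
# correct on the task, so only interventions can expose the misalignment.

def hidden_act(x):
    """The toy network's intermediate value at the site aligned with V.
    It matches the hypothesized V = a AND b only when c == 0 (an assumed flaw)."""
    a, b, c = x
    return (a and b) if not c else (a or b)

def low_model(x, patched_hidden=None):
    """Toy low-level model; passing `patched_hidden` mimics patching the
    aligned activation before the rest of the forward pass."""
    hidden = hidden_act(x) if patched_hidden is None else patched_hidden
    return int(hidden or x[2])

def high_interchange(base, source):
    """High-level hypothesis: V = a AND b, output = V OR c. This is the
    counterfactual prediction when V is taken from `source` and c from `base`."""
    return int((source[0] and source[1]) or base[2])

def interchange_success(base, source):
    """One interchange intervention: patch the source run's intermediate into
    the base run and compare with the high-level counterfactual prediction."""
    return low_model(base, patched_hidden=hidden_act(source)) == high_interchange(base, source)

print(interchange_success((1, 0, 0), (1, 1, 0)))  # True: the hypothesis explains this pair
print(interchange_success((1, 0, 0), (1, 0, 1)))  # False: it does not explain this one
```

Averaging this check over many (base, source) pairs gives the global interchange intervention accuracy that the paper proposes to break down by region.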

Abstract

We present a method for diagnosing interpretations of neural networks by identifying an input subspace where a proposed interpretation is highly faithful. Our method is particularly useful for causal-abstraction-style interpretability, where a high-level causal hypothesis is evaluated by interchange interventions. Rather than treating interchange intervention accuracy as a single global summary, we refine this framework by partitioning the input space into well-interpreted and under-interpreted regions according to pairwise interchange-intervention behavior. This turns causal abstraction from a purely global evaluation into a more diagnostic tool: it not only measures whether an interpretation works, but also reveals where it works, where it fails, and what distinguishes the two cases. This diagnostic view also provides practical heuristics for improving interpretations. By analyzing the structure of the well-interpreted and under-interpreted regions, we can identify missing distinctions in a high-level hypothesis, discover previously unmodeled intermediate variables, and combine complementary partial interpretations into a stronger one. We instantiate this idea as a simple four-step recipe and show that it yields informative error analyses across multiple causal abstraction settings. In a toy logic task, recursively applying the recipe recovers a high-level hypothesis from scratch. More broadly, our results suggest that partitioning the input space is a useful step toward more precise, constructive, and scalable mechanistic interpretability.
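
The partitioning step can be sketched by scoring each input on how often interchange interventions succeed when that input serves as the base, then thresholding the score. The per-input scoring rule and the 0.9 threshold below are illustrative choices, not the paper's concrete procedure, and the code reuses `interchange_success` from the sketch above.

```python
# Illustrative bucketing step (reuses `interchange_success` from the sketch
# above); the per-input score and the 0.9 threshold are assumptions, not the
# paper's concrete procedure.

from itertools import product

def bucket(inputs, success, threshold=0.9):
    """Partition `inputs` into well-interpreted and under-interpreted buckets by
    the fraction of pairwise interchange interventions, with each input acting
    as the base, that agree with the high-level hypothesis."""
    well, under = [], []
    for base in inputs:
        results = [success(base, src) for src in inputs if src != base]
        score = sum(results) / len(results)
        (well if score >= threshold else under).append(base)
    return well, under

inputs = list(product((0, 1), repeat=3))
well, under = bucket(inputs, interchange_success)
print("well-interpreted:", well)    # on this toy: every input with c == 1
print("under-interpreted:", under)  # on this toy: every input with c == 0
```

On this toy the split falls cleanly along the value of c, which is the kind of structural signal the paper's heuristics are meant to surface: the under-interpreted region points to a distinction the high-level hypothesis is missing.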