
FAME: Formal Abstract Minimal Explanation for Neural Networks

arXiv cs.AI / 3/12/2026


Key Points

  • FAME (Formal Abstract Minimal Explanations) is proposed as a new class of abductive explanations for neural networks, grounded in abstract interpretation.
  • It scales to large networks and reduces explanation size via dedicated perturbation domains that eliminate the need for a traversal order; the domains shrink progressively while LiRPA-based bounds discard irrelevant features.
  • The method converges to a formal abstract minimal explanation; the paper also introduces a procedure that measures the worst-case distance between an abstract minimal explanation and a true minimal explanation, combining adversarial attacks with an optional VERIX+ refinement step.
  • Empirical benchmarks show consistent gains in explanation size and runtime on medium- to large-scale networks compared to VERIX+.
  • The work contributes to explainable AI by providing a scalable, formal framework for generating minimal explanations for neural networks.

Abstract

We propose FAME (Formal Abstract Minimal Explanations), a new class of abductive explanations grounded in abstract interpretation. FAME is the first method to scale to large neural networks while reducing explanation size. Our main contribution is the design of dedicated perturbation domains that eliminate the need for traversal order. FAME progressively shrinks these domains and leverages LiRPA-based bounds to discard irrelevant features, ultimately converging to a formal abstract minimal explanation. To assess explanation quality, we introduce a procedure that measures the worst-case distance between an abstract minimal explanation and a true minimal explanation. This procedure combines adversarial attacks with an optional VERIX+ refinement step. We benchmark FAME against VERIX+ and demonstrate consistent gains in both explanation size and runtime on medium- to large-scale neural networks.
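The certification step at the heart of this pipeline — prove that keeping a subset of input features fixed forces the network's prediction, no matter how the remaining features vary — can be illustrated with interval bound propagation, the simplest member of the LiRPA family. The sketch below is purely illustrative: the toy network, the `eps` perturbation radius, and the greedy feature loop are all assumptions, and the greedy loop deliberately has the traversal-order dependence that FAME's dedicated perturbation domains are designed to remove.

```python
import numpy as np

# Toy 2-layer ReLU network with arbitrary weights (an assumption, not FAME's models).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

def ibp_bounds(lo, hi):
    """Interval bound propagation: sound output bounds for inputs in the box [lo, hi]."""
    def affine(W, b, lo, hi):
        c, r = (lo + hi) / 2, (hi - lo) / 2
        center = W @ c + b
        radius = np.abs(W) @ r        # worst-case spread through the linear layer
        return center - radius, center + radius
    lo, hi = affine(W1, b1, lo, hi)
    lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)   # ReLU is monotone
    return affine(W2, b2, lo, hi)

def abductive_explanation(x, eps=0.5):
    """Greedily free features whose eps-perturbation provably preserves the prediction.

    The features that remain fixed form a subset-minimal abductive explanation
    (minimal w.r.t. this traversal order; FAME avoids that order dependence)."""
    pred = int(np.argmax(W2 @ np.maximum(W1 @ x + b1, 0) + b2))
    fixed = set(range(len(x)))
    for i in range(len(x)):
        trial = fixed - {i}
        lo, hi = x.copy(), x.copy()
        for j in range(len(x)):
            if j not in trial:        # freed features may vary in [x_j - eps, x_j + eps]
                lo[j] -= eps
                hi[j] += eps
        out_lo, out_hi = ibp_bounds(lo, hi)
        # Feature i is irrelevant if the prediction is certified to survive its release.
        if out_lo[pred] > max(out_hi[k] for k in range(len(out_hi)) if k != pred):
            fixed = trial
    return pred, sorted(fixed)

x = np.array([1.0, -0.5, 0.25])
pred, expl = abductive_explanation(x)
print(pred, expl)   # class index and the feature indices that must stay fixed
```

In this sketch a feature survives in the explanation only when releasing it breaks the certificate; FAME's contribution, per the abstract, is to make that release step order-free and scalable by shrinking dedicated perturbation domains instead of iterating features one by one.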