AI Navigate

Causally Grounded Mechanistic Interpretability for LLMs with Faithful Natural-Language Explanations

arXiv cs.AI / 3/12/2026


Key Points

  • The authors propose a pipeline that links circuit-level analysis to natural-language explanations by identifying causally important attention heads via activation patching, generating explanations with both template-based and LLM-based methods, and evaluating faithfulness with ERASER-style metrics adapted for circuit attribution.
  • They evaluate on the Indirect Object Identification (IOI) task in GPT-2 Small, identifying six attention heads that account for 61.4% of the logit difference.
  • Circuit-based explanations achieve 100% sufficiency but only 22% comprehensiveness, revealing distributed backup mechanisms across the model's heads.
  • LLM-generated explanations outperform template baselines by 64% on quality metrics.
  • They report no correlation between model confidence and explanation faithfulness and identify three failure categories where explanations diverge from the underlying mechanisms.
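The causal-attribution step above can be illustrated with a toy sketch of activation patching. This is an assumed setup, not the paper's code: the "model" here is just a weighted sum of per-head activations, and patching a head's clean activation into the corrupted run measures that head's causal contribution to the logit difference.

```python
# Toy activation patching (illustrative only; real runs use a transformer
# with hooks, e.g. on GPT-2 Small's attention head outputs).

def model_logit_diff(head_activations, weights):
    """Logit difference (correct - incorrect token) as a sum of head contributions."""
    return sum(w * a for w, a in zip(weights, head_activations))

def patching_effect(clean_acts, corrupt_acts, weights, head):
    """Logit-diff recovered by restoring one head's clean activation in the corrupted run."""
    base = model_logit_diff(corrupt_acts, weights)
    patched = list(corrupt_acts)
    patched[head] = clean_acts[head]  # splice in the clean activation for this head
    return model_logit_diff(patched, weights) - base

weights = [0.5, 0.1, 2.0]   # hypothetical per-head contribution weights
clean   = [1.0, 1.0, 1.0]   # head activations on the clean (IOI) prompt
corrupt = [0.0, 0.0, 0.0]   # head activations on the corrupted prompt

effects = [patching_effect(clean, corrupt, weights, h) for h in range(3)]
# → [0.5, 0.1, 2.0]: head 2 is the causally important one
```

Ranking heads by this effect is how a pipeline like the authors' would select the six heads that account for most of the logit difference.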

Abstract

Mechanistic interpretability identifies internal circuits responsible for model behaviors, yet translating these findings into human-understandable explanations remains an open problem. We present a pipeline that bridges circuit-level analysis and natural language explanations by (i) identifying causally important attention heads via activation patching, (ii) generating explanations using both template-based and LLM-based methods, and (iii) evaluating faithfulness using ERASER-style metrics adapted for circuit-level attribution. We evaluate on the Indirect Object Identification (IOI) task in GPT-2 Small (124M parameters), identifying six attention heads accounting for 61.4% of the logit difference. Our circuit-based explanations achieve 100% sufficiency but only 22% comprehensiveness, revealing distributed backup mechanisms. LLM-generated explanations outperform template baselines by 64% on quality metrics. We find no correlation (r = 0.009) between model confidence and explanation faithfulness, and identify three failure categories explaining when explanations diverge from mechanisms.
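The sufficiency/comprehensiveness figures can be made concrete with a minimal sketch of ERASER-style metrics adapted to circuits, under one common reading (the paper's exact definitions may differ): sufficiency is the fraction of the logit difference retained when only the circuit heads are active, and comprehensiveness is the fraction lost when the circuit heads are ablated. The numbers below are hypothetical, chosen to reproduce the 100%/22% pattern.

```python
# Assumed ERASER-style metrics over logit differences (illustrative definitions).

def sufficiency(logit_diff_circuit_only, logit_diff_full):
    """Fraction of behavior retained when keeping only the circuit."""
    return logit_diff_circuit_only / logit_diff_full

def comprehensiveness(logit_diff_full, logit_diff_circuit_ablated):
    """Fraction of behavior lost when the circuit is removed."""
    return (logit_diff_full - logit_diff_circuit_ablated) / logit_diff_full

full_model   = 3.00  # hypothetical logit diff of the intact model
circuit_only = 3.00  # the circuit alone recovers the full effect
ablated      = 2.34  # backup heads preserve most behavior without the circuit

print(sufficiency(circuit_only, full_model))        # → 1.0
print(comprehensiveness(full_model, ablated))       # → 0.22
```

High sufficiency with low comprehensiveness, as in the paper's 100%/22% result, is exactly the signature of distributed backup mechanisms: the circuit is enough for the behavior, but the behavior does not depend on the circuit alone.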