Causally Grounded Mechanistic Interpretability for LLMs with Faithful Natural-Language Explanations
arXiv cs.AI / 3/12/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The authors propose a pipeline linking circuit-level analysis to natural-language explanations: causally important attention heads are identified via activation patching (see the patching sketch after this list), explanations are generated with both template-based and LLM-based methods, and faithfulness is evaluated with ERASER-style metrics adapted for circuit attribution.
- They evaluate on the Indirect Object Identification (IOI) task in GPT-2 Small, identifying six attention heads that account for 61.4% of the logit difference.
- Circuit-based explanations achieve 100% sufficiency but only 22% comprehensiveness (see the metrics sketch after this list), pointing to distributed backup mechanisms across the model's remaining heads.
- LLM-generated explanations outperform template baselines by 64% on quality metrics.
- They report no correlation between model confidence and explanation faithfulness, and they identify three failure categories in which explanations diverge from the underlying mechanisms.
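For readers unfamiliar with the patching step, here is a minimal sketch of head-level activation patching on the IOI task, assuming TransformerLens's `HookedTransformer` with GPT-2 Small. The single example prompt, the name-swap corruption, and the `patch_head` helper are illustrative assumptions, not the authors' exact protocol.

```python
# Hedged sketch: head-level activation patching on an IOI-style prompt (TransformerLens assumed).
from transformer_lens import HookedTransformer, utils

model = HookedTransformer.from_pretrained("gpt2")  # GPT-2 Small

clean_prompt = "When Mary and John went to the store, John gave a drink to"
corrupt_prompt = "When Mary and John went to the store, Alice gave a drink to"  # name-swap corruption (assumption)
io_token = model.to_single_token(" Mary")  # indirect object (correct completion)
s_token = model.to_single_token(" John")   # subject (distractor)

clean_tokens = model.to_tokens(clean_prompt)
corrupt_tokens = model.to_tokens(corrupt_prompt)
assert clean_tokens.shape == corrupt_tokens.shape  # patching requires aligned positions

def logit_diff(logits):
    # Logit difference between the indirect object and the subject at the final position.
    final = logits[0, -1]
    return (final[io_token] - final[s_token]).item()

# Cache clean activations, then rerun the corrupted prompt while patching in the
# clean output of one attention head at a time.
_, clean_cache = model.run_with_cache(clean_tokens)
baseline_corrupt = logit_diff(model(corrupt_tokens))

def patch_head(layer, head):
    hook_name = utils.get_act_name("z", layer)  # per-head attention output, shape [batch, pos, head, d_head]
    def hook(z, hook):
        z[:, :, head, :] = clean_cache[hook_name][:, :, head, :]
        return z
    patched_logits = model.run_with_hooks(corrupt_tokens, fwd_hooks=[(hook_name, hook)])
    return logit_diff(patched_logits) - baseline_corrupt  # logit-diff recovery attributable to this head

# Sweep all 144 heads of GPT-2 Small (slow but simple) and list the strongest causal contributors.
effects = {(l, h): patch_head(l, h)
           for l in range(model.cfg.n_layers) for h in range(model.cfg.n_heads)}
print(sorted(effects.items(), key=lambda kv: -kv[1])[:6])
```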
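And a hedged sketch of how ERASER-style sufficiency and comprehensiveness translate to a head-level circuit, continuing from the patching sketch above (it reuses `model`, `clean_tokens`, and `logit_diff`). Zero-ablation and the `CIRCUIT` head list are illustrative assumptions; the paper's ablation scheme and head set may differ.

```python
# Hedged sketch: ERASER-style sufficiency / comprehensiveness for a head-level circuit.
CIRCUIT = [(9, 9), (9, 6), (10, 0)]  # hypothetical (layer, head) pairs, not the paper's circuit

def run_with_heads_ablated(tokens, heads_to_ablate):
    # Zero-ablate the listed heads' outputs (one hook per head); returns patched logits.
    def make_hook(head):
        def hook(z, hook):
            z[:, :, head, :] = 0.0
            return z
        return hook
    fwd_hooks = [(utils.get_act_name("z", layer), make_hook(head)) for layer, head in heads_to_ablate]
    return model.run_with_hooks(tokens, fwd_hooks=fwd_hooks)

all_heads = [(l, h) for l in range(model.cfg.n_layers) for h in range(model.cfg.n_heads)]
full_ld = logit_diff(model(clean_tokens))

# Sufficiency: fraction of the behaviour that survives when only the circuit heads are kept.
non_circuit_heads = [hd for hd in all_heads if hd not in CIRCUIT]
sufficiency = logit_diff(run_with_heads_ablated(clean_tokens, non_circuit_heads)) / full_ld

# Comprehensiveness: fraction of the behaviour lost when the circuit heads are removed.
comprehensiveness = 1 - logit_diff(run_with_heads_ablated(clean_tokens, CIRCUIT)) / full_ld

print(f"sufficiency={sufficiency:.2f}, comprehensiveness={comprehensiveness:.2f}")
```

Under this reading, the paper's 100% sufficiency / 22% comprehensiveness result means the identified heads preserve the behaviour on their own, yet removing them still leaves most of the logit difference intact, i.e. other heads can back them up.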