Automated Interpretability and Feature Discovery in Language Models with Agents

arXiv cs.CL / 5/5/2026


Key Points

  • The paper proposes an autonomous multi-agent framework for mechanistic interpretability that both generates explanations and discovers internal features in large language models.
  • It uses two coupled feedback loops: one refines explanation hypotheses through targeted prompt controls and multi-metric evaluation; the other discovers features by building a k-nearest-neighbor graph in activation space and filtering candidates by statistical separability and semantic coherence (see the sketch after this list).
  • Experiments on the Gemma-2 model family and on MLP neurons in weight-sparse transformer variants show improved results over one-shot automated interpretability methods.
  • The approach aims to produce auditable, falsifiable explanation traces and can uncover language-specific and safety-relevant internal features.
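A minimal sketch of the feature-discovery filtering described in the second bullet, assuming per-neuron features and a pre-labeled prompt set; the function name, the Mann-Whitney separability test, and the neighborhood-purity coherence proxy are illustrative stand-ins, not the paper's exact components.

```python
# Hypothetical sketch: filter candidate neuron features by statistical
# separability and a k-NN-graph coherence check in activation space.
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.neighbors import NearestNeighbors


def discover_candidate_features(activations, is_targeted, k=10,
                                separability_p=1e-3, coherence_min=0.5):
    """activations: (n_prompts, d) array of activations for a prompt set.
    is_targeted:  (n_prompts,) boolean mask for prompts probing the concept."""
    # k-NN graph over activation space; neighborhood purity of the targeted
    # prompts serves as a semantic-coherence proxy for the prompt set.
    nn = NearestNeighbors(n_neighbors=k).fit(activations)
    _, neighbor_idx = nn.kneighbors(activations[is_targeted])
    purity = is_targeted[neighbor_idx].mean()  # includes self-neighbors
    if purity < coherence_min:
        return []  # targeted prompts do not form a coherent activation region

    candidates = []
    for dim in range(activations.shape[1]):
        on, off = activations[is_targeted, dim], activations[~is_targeted, dim]
        # Statistical separability: does this dimension distinguish targeted
        # prompts from controls under a non-parametric test?
        _, p_value = mannwhitneyu(on, off, alternative="two-sided")
        if p_value <= separability_p:
            candidates.append((dim, p_value))
    return sorted(candidates, key=lambda c: c[1])
```

In the paper's pipeline the agent also generates the prompt sets itself; here they are assumed to be given.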

Abstract

We introduce an autonomous multi-agent framework for mechanistic interpretability that automates both the explanation and the discovery of internal features in large language models. The system runs two coupled loops: (1) explanation refinement, where an agent proposes competing hypotheses and iteratively tests them with targeted prompt controls and a multi-metric evaluation; and (2) feature discovery, where an agent generates prompt sets, constructs a k-nearest-neighbor graph in activation space, and retrieves candidate features using statistical separability and semantic coherence criteria. On Gemma-2 family models and MLP neurons in weight-sparse transformers, our agent improves over one-shot auto-interpretations, discovers language-specific and safety-relevant features, and produces auditable explanation traces, showing that agent-driven empirical loops yield sharper and more falsifiable explanations than one-shot labels.
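To make the first loop concrete, here is an illustrative skeleton of explanation refinement under assumed interfaces: propose_hypotheses, generate_control_prompts, and get_activation are placeholders standing in for the paper's agent components, and the three metrics are plausible examples rather than the authors' evaluation suite.

```python
# Hypothetical skeleton of the explanation-refinement loop: propose competing
# hypotheses, test each with targeted prompt controls, score with several
# metrics, and keep refining around the best-supported explanation.
def refine_explanation(feature, propose_hypotheses, generate_control_prompts,
                       get_activation, n_rounds=3):
    candidates = propose_hypotheses(feature, previous_best=None)
    trace, best = [], None

    for _ in range(n_rounds):
        scored = []
        for explanation in candidates:
            # Targeted controls: prompts the explanation predicts will fire
            # the feature (positives) and prompts it predicts will not.
            positives, negatives = generate_control_prompts(explanation)
            pos = [get_activation(feature, p) for p in positives]
            neg = [get_activation(feature, p) for p in negatives]

            # Multi-metric evaluation (illustrative): recall on positives,
            # rejection of negatives, and the mean activation margin.
            recall = sum(a > 0 for a in pos) / len(pos)
            rejection = sum(a <= 0 for a in neg) / len(neg)
            margin = sum(pos) / len(pos) - sum(neg) / len(neg)
            scored.append((recall + rejection + 0.1 * margin, explanation))

        scored.sort(reverse=True)
        best = scored[0]
        trace.append(scored)  # auditable record of every hypothesis tested
        # Next round: new competing hypotheses conditioned on the current best.
        candidates = propose_hypotheses(feature, previous_best=best[1])

    return best, trace
```

Keeping every scored hypothesis in `trace` mirrors the paper's emphasis on auditable, falsifiable explanation traces rather than a single final label.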