AI Navigate

Pitfalls in Evaluating Interpretability Agents

arXiv cs.AI / 3/23/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper investigates how to evaluate automated interpretability agents, focusing on LLM-driven systems that explain model components during circuit analysis tasks.
  • It builds an agentic research system in which the agent iteratively designs experiments and refines hypotheses, and compares its explanations to human expert explanations across six circuit analysis tasks.
  • The study reveals replication-based evaluation pitfalls, including subjectivity and incompleteness of human explanations, and the risk that LLMs memorize or guess published findings.
  • It proposes an unsupervised intrinsic evaluation framework based on the functional interchangeability of model components to better assess interpretability systems.
  • The work highlights fundamental challenges in evaluating complex automated interpretability and questions the reliability of traditional replication-based methods.

Abstract

Automated interpretability systems aim to reduce the need for human labor and scale analysis to increasingly large models and diverse tasks. Recent efforts toward this goal leverage large language models (LLMs) at increasing levels of autonomy, ranging from fixed one-shot workflows to fully autonomous interpretability agents. This shift creates a corresponding need to scale evaluation approaches to keep pace with both the volume and complexity of generated explanations. We investigate this challenge in the context of automated circuit analysis -- explaining the roles of model components when performing specific tasks. To this end, we build an agentic system in which a research agent iteratively designs experiments and refines hypotheses. When evaluated against human expert explanations across six circuit analysis tasks in the literature, the system appears competitive. However, closer examination reveals several pitfalls of replication-based evaluation: human expert explanations can be subjective or incomplete, outcome-based comparisons obscure the research process, and LLM-based systems may reproduce published findings via memorization or informed guessing. To address some of these pitfalls, we propose an unsupervised intrinsic evaluation based on the functional interchangeability of model components. Our work demonstrates fundamental challenges in evaluating complex automated interpretability systems and reveals key limitations of replication-based evaluation.
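The abstract does not spell out how "functional interchangeability of model components" is measured. One natural reading, sketched below purely as an illustration (this is an assumption, not the paper's implementation), is an interchange test: treat each component as a function on activations, substitute one component's output for another's, and check how much the model's downstream task behavior changes. Components that can be swapped with little behavioral change are functionally interchangeable; the size of the gap gives an unsupervised score that requires no human reference explanation.

```python
import numpy as np

# Illustrative sketch only: components are stand-in functions from an input
# vector to an activation vector, and `readout` is a stand-in for downstream
# task behavior. None of these names come from the paper.

rng = np.random.default_rng(0)
W_shared = rng.normal(size=(8, 8))

def component_a(x):
    # Hypothesized to implement some function of the input...
    return W_shared @ x

def component_b(x):
    # ...which this component implements too, up to small noise.
    return W_shared @ x + 1e-3 * rng.normal(size=8)

def component_c(x):
    # A functionally different component.
    return -x

def readout(h):
    # Stand-in for the model's downstream task behavior.
    return float(np.tanh(h).sum())

def interchange_gap(comp1, comp2, inputs):
    """Mean absolute change in task output when comp2's activations
    are substituted for comp1's. Small gap = interchangeable."""
    gaps = [abs(readout(comp1(x)) - readout(comp2(x))) for x in inputs]
    return float(np.mean(gaps))

inputs = [rng.normal(size=8) for _ in range(32)]
gap_ab = interchange_gap(component_a, component_b, inputs)  # small gap
gap_ac = interchange_gap(component_a, component_c, inputs)  # large gap
```

Here `gap_ab` comes out far smaller than `gap_ac`, flagging components a and b as interchangeable without consulting any human-written explanation, which is the property that makes this style of evaluation intrinsic and unsupervised.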