Seeing Through Circuits: Faithful Mechanistic Interpretability for Vision Transformers

arXiv cs.AI / 4/17/2026


Key Points

  • The paper argues that mechanistic interpretability needs circuit-level transparency (how information is routed), not just neuron-level analysis (what information is encoded), especially for vision transformers.
  • It proposes Automatic Visual Circuit Discovery (Vi-CD) to recover edge-based, class-specific computational circuits from vision transformer models.
  • The authors show Vi-CD can identify circuits related to typographic attacks in CLIP, improving understanding of how such attacks propagate through model components.
  • The work also finds circuits that support “steering” to mitigate or correct harmful model behavior, making the interpretability outputs more actionable.
  • Overall, the study demonstrates that meaningful edge-based mechanistic circuits can be extracted from vision transformers, increasing trust, safety, and model understanding.

Abstract

Transparency of neural networks' internal reasoning is at the heart of interpretability research, adding to trust, safety, and understanding of these models. The field of mechanistic interpretability has recently focused on studying task-specific computational graphs, defined by connections (edges) between model components. Such edge-based circuits have been defined in the context of large language models, yet vision-based approaches so far only consider neuron-based circuits. These tell which information is encoded, but not how it is routed through the complex wiring of a neural network. In this work, we investigate whether useful mechanistic circuits can be identified through computational graphs in vision transformers. We propose an effective method for Automatic Visual Circuit Discovery (Vi-CD) that recovers class-specific circuits for classification, identifies circuits underlying typographic attacks in CLIP, and discovers circuits that lend themselves to steering to correct harmful model behavior. Overall, we find that insightful and actionable edge-based circuits can be recovered from vision transformers, adding transparency to the internal computations of these models.
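To make the idea of edge-based circuit discovery concrete, here is a minimal, self-contained sketch of the general technique (greedy edge pruning via ablation, in the style of automatic circuit discovery methods). This is not the paper's actual Vi-CD implementation: the toy graph, edge weights, function names, and threshold below are all illustrative assumptions. The core idea it demonstrates is shared with the paper's setting: ablate each edge of a computational graph, and discard it from the circuit if the model's output barely changes.

```python
# Toy computational graph: each node computes a weighted sum of its parents.
# Edges are (src, dst) pairs; ablating an edge zeroes src's contribution to dst.
# All names and values here are hypothetical, for illustration only.

EDGES = {
    ("input", "head_a"): 1.0,
    ("input", "head_b"): 0.01,   # nearly irrelevant edge
    ("head_a", "logit"): 2.0,
    ("head_b", "logit"): 0.01,   # nearly irrelevant edge
}

def forward(x, ablated=frozenset()):
    """Run the toy graph on scalar input x, zeroing any ablated edges."""
    acts = {"input": x}
    for node in ("head_a", "head_b", "logit"):
        acts[node] = sum(
            w * acts[src]
            for (src, dst), w in EDGES.items()
            if dst == node and (src, dst) not in ablated
        )
    return acts["logit"]

def discover_circuit(x, tau=0.1):
    """Greedily ablate edges whose removal shifts the output by less than tau.

    The surviving edges form the discovered circuit for this input.
    """
    clean = forward(x)
    ablated = set()
    for edge in sorted(EDGES):
        trial = frozenset(ablated | {edge})
        if abs(forward(x, trial) - clean) < tau:
            ablated.add(edge)  # edge is not needed to reproduce the output
    return set(EDGES) - ablated

circuit = discover_circuit(1.0)
# The low-weight edges through head_b are pruned; the circuit routes
# input -> head_a -> logit.
```

In a real vision transformer the nodes would be attention heads and MLP blocks, ablation would typically patch in activations from a corrupted input rather than zeros, and the metric would be a task-specific quantity such as a class-logit difference; but the prune-and-measure loop has this same shape.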