Language Models Can Explain Visual Features via Steering

arXiv cs.CV / 3/25/2026


Key Points

  • Sparse Autoencoders can discover many interpretable visual features, but automatically generating explanations for them without human involvement has remained an open problem.
  • The paper proposes “Steering,” a causal-intervention method that uses Vision-Language Model structure to activate individual SAE features by steering the vision encoder with an empty image, then prompting the language model to describe the resulting visual concept.
  • The authors report that Steering provides a scalable way to explain vision-model features and complements explanation methods based on top activating input examples.
  • Explanation quality is shown to improve consistently as the language model scales, suggesting the approach benefits from larger LLMs.
  • They also introduce “Steering-informed Top-k,” a hybrid technique combining Steering with input-based methods to reach state-of-the-art explanation quality without additional computational cost.

Abstract

Sparse Autoencoders uncover thousands of features in vision models, yet explaining these features without requiring human intervention remains an open challenge. While previous work has proposed generating correlation-based explanations based on top activating input examples, we present a fundamentally different alternative based on causal interventions. We leverage the structure of Vision-Language Models and steer individual SAE features in the vision encoder after providing an empty image. Then, we prompt the language model to explain what it "sees", effectively eliciting the visual concept represented by each feature. Results show that Steering offers a scalable alternative that complements traditional approaches based on input examples, serving as a new axis for automated interpretability in vision models. Moreover, the quality of explanations improves consistently with the scale of the language model, highlighting our method as a promising direction for future research. Finally, we propose Steering-informed Top-k, a hybrid approach that combines the strengths of causal interventions and input-based approaches to achieve state-of-the-art explanation quality without additional computational cost.
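To make the intervention concrete, here is a minimal toy sketch of the mechanism the abstract describes: take the vision encoder's activations for an empty image and add a scaled SAE decoder direction for one feature before the activations reach the language model. Everything here is illustrative: the dimensions, the random decoder matrix `W_dec`, the zero activations standing in for a blank image, and the scale `alpha` are all assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

D_MODEL, N_FEATURES = 8, 32  # toy sizes, not the paper's

# Toy SAE decoder: each row is a unit-norm feature direction
# in the vision encoder's activation space.
W_dec = rng.normal(size=(N_FEATURES, D_MODEL))
W_dec /= np.linalg.norm(W_dec, axis=1, keepdims=True)

def steer(vision_acts, feature_idx, alpha):
    """Add alpha times one SAE feature direction to every visual token."""
    return vision_acts + alpha * W_dec[feature_idx]

# Stand-in for the vision encoder's output on an "empty image":
# 4 visual tokens, all zero activations.
empty_acts = np.zeros((4, D_MODEL))

steered = steer(empty_acts, feature_idx=7, alpha=5.0)

# Because we started from zeros, every steered token now points
# exactly along feature 7's direction (cosine similarity 1).
cos = steered @ W_dec[7] / np.linalg.norm(steered, axis=1)
print(np.allclose(cos, 1.0))  # → True

# In the actual method, `steered` would be fed to the language model
# with a prompt like "Describe what you see", and the reply becomes
# the candidate explanation for feature 7.
```

With a real VLM the same effect is typically achieved with a forward hook on the chosen encoder layer rather than by editing activations by hand, but the arithmetic of the intervention is just this addition.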