Mechanistically Interpreting Compression in Vision-Language Models

arXiv cs.AI / 3/27/2026


Key Points

  • The paper studies how common compression methods for vision-language models—specifically pruning and quantization—alter internal computation and safety behavior.
  • Using causal circuit analysis and crosscoder-based feature comparisons, the authors find that pruning largely preserves circuit structure but rotates/attenuates internal features, while quantization changes circuits more broadly while keeping surviving features better aligned.
  • The work introduces VLMSafe-420, a benchmark that matches harmful inputs with benign counterfactuals across multiple safety categories to enable more controlled evaluation.
  • Results indicate that pruning can sharply reduce genuine refusal behavior, implying that compression choice has direct safety consequences for deployed VLMs.
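The "rotates/attenuates internal features" finding above can be made concrete with two simple quantities: how far a feature's direction turns (one minus cosine similarity) and how much its magnitude shrinks (norm ratio). The sketch below uses synthetic vectors as stand-ins for base and pruned model features; the data and thresholds are illustrative, not the paper's.

```python
import numpy as np

# Hypothetical activation vector for one feature in the base model
# (illustrative random data, not taken from the paper).
rng = np.random.default_rng(0)
base_feat = rng.normal(size=64)

# Simulate the reported pruning effect: the feature is slightly rotated
# (noise added to its direction) and attenuated (scaled down).
noise = rng.normal(size=64)
pruned_feat = 0.6 * (base_feat + 0.3 * noise)

# Rotation: 1 - cosine similarity between the two feature directions.
cos_sim = np.dot(base_feat, pruned_feat) / (
    np.linalg.norm(base_feat) * np.linalg.norm(pruned_feat)
)
rotation = 1.0 - cos_sim

# Attenuation: ratio of feature norms (values < 1 mean the pruned
# feature fires more weakly than its base-model counterpart).
attenuation = np.linalg.norm(pruned_feat) / np.linalg.norm(base_feat)

print(f"rotation (1 - cos): {rotation:.3f}")
print(f"norm ratio: {attenuation:.3f}")
```

Under this framing, pruning would show moderate rotation with norm ratios below 1, while quantization would show smaller rotation for the features that survive.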

Abstract

Compressed vision-language models (VLMs) are widely used to reduce memory and compute costs, making them a suitable choice for real-world deployment. However, compressing these models raises concerns about whether internal computations and safety behaviors are preserved. In this work, we use causal circuit analysis and crosscoder-based feature comparisons to examine how pruning and quantization change model internals across representative VLMs. We observe that pruning generally keeps circuit structure intact but rotates and attenuates internal features, while quantization modifies circuits more broadly yet leaves the surviving features better aligned. Leveraging this insight, we also introduce VLMSafe-420, a novel benchmark that pairs harmful inputs with matched benign counterfactuals across various safety categories. Our findings show that pruning causes a sharp drop in genuine refusal behavior, suggesting that the choice of compression method has direct safety implications.
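A paired-counterfactual benchmark like VLMSafe-420 enables scoring both sides of refusal behavior: whether the model refuses harmful inputs and whether it still complies on the matched benign versions. The sketch below shows this evaluation shape with a crude keyword-based refusal check and made-up response pairs; both are placeholders, not the paper's actual metric or data.

```python
# Markers for a toy refusal detector; a real evaluation would use a
# trained classifier or human judgments instead.
REFUSAL_MARKERS = ("i can't", "i cannot", "i won't")

def is_refusal(response: str) -> bool:
    """Crude keyword check standing in for a real refusal classifier."""
    return response.lower().startswith(REFUSAL_MARKERS)

# (harmful_response, benign_response) pairs from some model under test
# (hypothetical examples for illustration).
paired_responses = [
    ("I can't help with that.", "Sure, here is a recipe."),
    ("Here is how you do it...", "Sure, here are the steps."),
]

refused_harmful = sum(is_refusal(h) for h, _ in paired_responses)
complied_benign = sum(not is_refusal(b) for _, b in paired_responses)

# Genuine refusal requires both: refusing harm AND not over-refusing
# the matched benign counterfactual.
refusal_rate = refused_harmful / len(paired_responses)
over_refusal = 1 - complied_benign / len(paired_responses)

print(f"harmful refusal rate: {refusal_rate:.2f}")
print(f"benign over-refusal rate: {over_refusal:.2f}")
```

The matched pairs are what make the comparison controlled: a drop in the harmful refusal rate after pruning, with benign compliance unchanged, isolates a genuine loss of safety behavior rather than a general shift in answer style.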