Diagnosing and Repairing Unsafe Channels in Vision-Language Models via Causal Discovery and Dual-Modal Safety Subspace Projection

arXiv cs.CV / 3/31/2026


Key Points

  • The paper proposes CARE, a framework for diagnosing and repairing unsafe internal pathways in large vision-language models (LVLMs), using causal mediation analysis to pinpoint the neurons and layers causally responsible for unsafe behaviors.
  • It introduces a dual-modal safety subspace projection method that learns generalized safety subspaces for both the visual and textual modalities via generalized eigen-decomposition over benign versus malicious activations.
  • During inference, CARE applies dynamic projection with a hybrid fusion mechanism to balance visual and textual corrections, suppressing unsafe features while maintaining semantic fidelity.
  • Experiments on multiple safety benchmarks show improved safety robustness compared with prior activation-steering and alignment-based baselines, with no loss in general multimodal capabilities.
  • The method is reported to transfer well to unseen attacks, indicating stronger generalization beyond the evaluated adversarial settings.
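The subspace-learning step in the second key point can be illustrated with a small sketch. This is not the paper's code: the function name, shapes, and regularization are assumptions, and it only shows the core idea of a generalized eigen-decomposition that contrasts the covariance of malicious activations against that of benign ones.

```python
# Hypothetical sketch: learn an "unsafe subspace" by solving the generalized
# eigenproblem C_mal v = lambda * C_ben v. The top eigenvectors are directions
# along which malicious activations carry maximal variance relative to benign.
import numpy as np
from scipy.linalg import eigh

def safety_subspace(benign, malicious, k=1, reg=1e-3):
    """benign/malicious: (n_samples, d) activation matrices (assumed layout).
    Returns a (d, k) basis spanning the most 'unsafe' directions."""
    d = benign.shape[1]
    c_ben = np.cov(benign, rowvar=False) + reg * np.eye(d)  # regularize for stability
    c_mal = np.cov(malicious, rowvar=False) + reg * np.eye(d)
    # eigh returns eigenvalues in ascending order, so take the last k vectors.
    _vals, vecs = eigh(c_mal, c_ben)
    return vecs[:, -k:]

# Toy data: malicious activations add large extra variance along dimension 0.
rng = np.random.default_rng(0)
benign = rng.normal(size=(200, 8))
malicious = benign + rng.normal(size=(200, 8)) * np.array([5.0] + [0.1] * 7)
U = safety_subspace(benign, malicious, k=1)
print(U.shape)
```

On this toy data, the recovered direction is dominated by the dimension where malicious activations differ most, which is the behavior the key point attributes to the learned safety subspaces.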

Abstract

Large Vision-Language Models (LVLMs) have achieved impressive performance across multimodal understanding and reasoning tasks, yet their internal safety mechanisms remain opaque and poorly controlled. In this work, we present CARE, a comprehensive framework for diagnosing and repairing unsafe channels within LVLMs. We first perform causal mediation analysis to identify the neurons and layers that are causally responsible for unsafe behaviors. Based on these findings, we introduce a dual-modal safety subspace projection method that learns generalized safety subspaces for both the visual and textual modalities through generalized eigen-decomposition between benign and malicious activations. During inference, activations are dynamically projected toward these safety subspaces via a hybrid fusion mechanism that adaptively balances visual and textual corrections, effectively suppressing unsafe features while preserving semantic fidelity. Extensive experiments on multiple safety benchmarks demonstrate that our causal-subspace repair framework significantly enhances safety robustness without degrading general multimodal capabilities, outperforming prior activation-steering and alignment-based baselines. Additionally, our method exhibits good transferability, successfully defending against unseen attacks.
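The inference-time step described in the abstract, projecting activations away from unsafe directions and fusing the visual and textual corrections, can be sketched as follows. All names and the fusion rule here are assumptions for illustration (the paper's actual mechanism may differ); the sketch simply weights each modality's correction by its share of "unsafe energy", i.e., the norm of its component inside the learned unsafe subspace.

```python
# Hypothetical sketch of dynamic projection with a hybrid fusion weight.
import numpy as np

def project_out(h, U):
    """Remove the component of activation h that lies in span(U)."""
    Q, _ = np.linalg.qr(U)          # orthonormalize the subspace basis
    return h - Q @ (Q.T @ h)

def hybrid_repair(h_vis, h_txt, U_vis, U_txt, eps=1e-8):
    """Apply partial corrections to both modalities, balanced by an
    adaptive weight alpha derived from each modality's unsafe energy."""
    corr_vis = h_vis - project_out(h_vis, U_vis)   # unsafe component (visual)
    corr_txt = h_txt - project_out(h_txt, U_txt)   # unsafe component (textual)
    alpha = np.linalg.norm(corr_vis) / (
        np.linalg.norm(corr_vis) + np.linalg.norm(corr_txt) + eps
    )
    # The modality carrying more unsafe energy receives the larger correction.
    return h_vis - alpha * corr_vis, h_txt - (1 - alpha) * corr_txt, alpha

# Toy example: each modality has one designated unsafe direction.
rng = np.random.default_rng(1)
U_vis, U_txt = np.eye(6)[:, :1], np.eye(6)[:, 1:2]
h_vis, h_txt = rng.normal(size=6), rng.normal(size=6)
v_new, t_new, alpha = hybrid_repair(h_vis, h_txt, U_vis, U_txt)
print(alpha)
```

Because the corrections are partial (scaled by alpha rather than fully removed), components outside the unsafe subspace are untouched, which mirrors the abstract's goal of suppressing unsafe features while preserving semantic fidelity.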