Diagnosing and Repairing Unsafe Channels in Vision-Language Models via Causal Discovery and Dual-Modal Safety Subspace Projection
arXiv cs.CV / 3/31/2026
Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper proposes CARE, a framework to diagnose and repair unsafe internal pathways in large vision-language models by using causal mediation analysis to pinpoint neurons and layers responsible for unsafe behaviors.
- It introduces dual-modal safety subspace projection, which learns a safety subspace for each of the visual and textual modalities via generalized eigen-decomposition of benign versus malicious activation statistics.
- During inference, CARE applies dynamic projection with a hybrid fusion mechanism that balances visual and textual corrections, suppressing unsafe features while preserving semantic fidelity (a minimal sketch of this pipeline follows the list).
- Experiments on multiple safety benchmarks show improved safety robustness compared with prior activation-steering and alignment-based baselines, with no loss in general multimodal capabilities.
- The method is reported to transfer well to unseen attacks, indicating stronger generalization beyond the evaluated adversarial settings.
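The subspace-learning and projection steps described above can be sketched compactly. The snippet below is a minimal, hypothetical illustration rather than the paper's implementation: it estimates a per-modality "unsafe" subspace by solving the generalized eigenproblem C_mal v = λ C_ben v on benign versus malicious activation covariances, then removes that subspace from activations at inference. The function names, the fixed `alpha` fusion weight, and the use of NumPy/SciPy are assumptions made for illustration only.

```python
import numpy as np
from scipy.linalg import eigh

def safety_subspace(benign_acts, malicious_acts, k=8, eps=1e-4):
    """Hypothetical sketch: find the top-k directions along which malicious
    activations dominate benign ones by solving the generalized eigenproblem
    C_mal v = lambda * C_ben v."""
    d = benign_acts.shape[1]
    c_ben = np.cov(benign_acts, rowvar=False) + eps * np.eye(d)
    c_mal = np.cov(malicious_acts, rowvar=False) + eps * np.eye(d)
    # scipy's eigh solves the symmetric generalized problem C_mal v = w C_ben v;
    # eigenvalues are returned in ascending order, so take the largest k.
    eigvals, eigvecs = eigh(c_mal, c_ben)
    order = np.argsort(eigvals)[::-1][:k]
    basis = eigvecs[:, order]            # (d, k) candidate unsafe directions
    # Generalized eigenvectors are not Euclidean-orthonormal; re-orthonormalize
    # so the projection below is well defined.
    basis, _ = np.linalg.qr(basis)
    return basis

def project_out(acts, basis, strength=1.0):
    """Remove the component of each activation that lies in the unsafe subspace."""
    unsafe = acts @ basis @ basis.T
    return acts - strength * unsafe

def dual_modal_repair(vis_acts, txt_acts, vis_basis, txt_basis, alpha=0.5):
    """Toy stand-in for the hybrid fusion: alpha trades off how strongly the
    visual vs. textual streams are corrected."""
    return (project_out(vis_acts, vis_basis, strength=alpha),
            project_out(txt_acts, txt_basis, strength=1.0 - alpha))

# Usage sketch with synthetic activations (in practice these would be hooked
# from the diagnosed layers of the vision-language model).
d = 512
benign = np.random.randn(1000, d)
malicious = np.random.randn(1000, d) + 0.5
basis = safety_subspace(benign, malicious, k=8)
repaired = project_out(np.random.randn(4, d), basis, strength=0.8)
```

In the paper the projection strength and the visual/textual balance are described as dynamic at inference time; the fixed `strength` and `alpha` values here are simplifications for the sketch.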