Understanding and Defending VLM Jailbreaks via Jailbreak-Related Representation Shift
arXiv cs.CV / 3/19/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- VLM safety alignment weakens once the visual modality is added: pairing a prompt with an image raises jailbreak success rates even when the underlying intent is explicitly harmful.
- Benign and harmful inputs are separable in the model's representation space, and jailbroken samples occupy a distinct internal state, separate from refusals (see the probe sketch after this list).
- The authors define a jailbreak direction and a jailbreak-related shift (JRS) as the component of the image-induced representation shift along that direction, unifying diverse jailbreak behaviors.
- They propose an inference-time defense, JRS-Rem, that removes the jailbreak-related shift from the hidden states to improve safety while preserving performance on benign tasks (see the JRS-Rem sketch after this list).
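A minimal way to test the separability claim is a linear probe on cached hidden states. The sketch below is an illustration, not the paper's evaluation protocol: the features `H`, labels `y`, and layer choice are stand-in assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical cached hidden states from one VLM layer: H has shape
# (n_samples, d_model); y marks each prompt as benign (0) or harmful (1).
# Random stand-in data here; real features would come from the model.
rng = np.random.default_rng(0)
H = rng.normal(size=(512, 4096))
y = rng.integers(0, 2, size=512)

H_train, H_test, y_train, y_test = train_test_split(
    H, y, test_size=0.2, random_state=0
)

# A linear probe: if benign and harmful inputs are separable in
# representation space, this classifier reaches high held-out accuracy.
probe = LogisticRegression(max_iter=1000).fit(H_train, y_train)
print(f"probe accuracy: {probe.score(H_test, y_test):.3f}")
```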
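Following the paper's description, a minimal sketch of the JRS computation and its removal at inference: the jailbreak direction is estimated as the (normalized) difference between mean jailbreak and mean refusal states, and JRS-Rem subtracts the projection of the image-induced shift onto that direction. Function names, tensor shapes, and the layer where this is applied are illustrative assumptions, not the authors' implementation.

```python
import torch

def jailbreak_direction(h_jail: torch.Tensor, h_refuse: torch.Tensor) -> torch.Tensor:
    """Unit vector from the mean refusal state to the mean jailbreak state.

    h_jail, h_refuse: (n_samples, d_model) hidden states from probe sets.
    """
    d = h_jail.mean(dim=0) - h_refuse.mean(dim=0)
    return d / d.norm()

def jrs(h_image: torch.Tensor, h_text: torch.Tensor, d_hat: torch.Tensor) -> torch.Tensor:
    """Jailbreak-related shift: the component of the image-induced
    representation shift (h_image - h_text) along the jailbreak direction."""
    shift = h_image - h_text            # image-induced shift, (batch, d_model)
    coeff = shift @ d_hat               # scalar projection per sample, (batch,)
    return coeff.unsqueeze(-1) * d_hat  # shift component along d_hat

def jrs_rem(h_image: torch.Tensor, h_text: torch.Tensor, d_hat: torch.Tensor) -> torch.Tensor:
    """Inference-time removal: subtract only the jailbreak-related component,
    keeping the rest of the image-induced shift (benign visual content)."""
    return h_image - jrs(h_image, h_text, d_hat)

# Usage (shapes illustrative): d_hat is estimated offline from jailbreak and
# refusal probe sets, then applied to hidden states at each forward pass.
d_model = 4096
h_jail, h_refuse = torch.randn(256, d_model), torch.randn(256, d_model)
d_hat = jailbreak_direction(h_jail, h_refuse)
h_image, h_text = torch.randn(8, d_model), torch.randn(8, d_model)
h_defended = jrs_rem(h_image, h_text, d_hat)
```

The design rationale in the paper is that subtracting only the component along the jailbreak direction targets the unsafe shift while leaving the remainder of the image-induced representation intact, which is why benign-task performance is preserved.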