Relationship-Aware Safety Unlearning for Multimodal LLMs
arXiv cs.AI / 3/17/2026
Key Points
- Generative multimodal models can exhibit safety failures that are inherently relational, making two benign concepts unsafe when linked by a specific action or relation.
- The paper proposes relationship-aware safety unlearning, which explicitly represents unsafe object-relation-object (O-R-O) tuples and applies targeted parameter-efficient (LoRA) edits that suppress unsafe tuples while preserving object marginals and safe neighboring relations.
- The authors validate the approach with CLIP-based experiments and assess robustness under paraphrase, contextual, and out-of-distribution image attacks.
- By focusing on relational safety instead of isolated concepts, the method aims to reduce collateral damage from unlearning and improve safety without harming benign capabilities.
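The core idea in the key points can be illustrated with a toy sketch. The snippet below is not the paper's method: it uses random vectors as stand-ins for CLIP-like embeddings, a hypothetical `tuple_vec` composition of O-R-O tuples, and a closed-form rank-1 update in place of trained LoRA adapters. It only shows the shape of a targeted low-rank edit that zeroes a frozen head's response to one unsafe tuple direction while leaving nearby safe tuples mostly intact.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 32  # embedding dimension (toy value)

# Toy embeddings standing in for CLIP features of objects and relations.
emb = {name: rng.normal(size=d) / np.sqrt(d)
       for name in ["person", "knife", "holding", "stabbing"]}

def tuple_vec(o1, r, o2):
    # Illustrative additive composition of an O-R-O tuple, unit-normalized.
    v = emb[o1] + emb[r] + emb[o2]
    return v / np.linalg.norm(v)

# Frozen "head" weights whose response we want to edit.
W = rng.normal(size=(d, d)) / np.sqrt(d)

def score(v, W):
    # Magnitude of the head's response to a tuple embedding.
    return float(np.linalg.norm(W @ v))

unsafe = tuple_vec("person", "stabbing", "person")
safe = tuple_vec("person", "holding", "knife")

# Rank-1 LoRA-style edit W' = W + B @ A with A = unsafe^T and
# B = -(W @ unsafe): this exactly cancels W's action along the unsafe
# tuple direction and perturbs other directions only in proportion to
# their cosine similarity with it.
A = unsafe[None, :]          # (1, d) "down" projection
B = -(W @ unsafe)[:, None]   # (d, 1) "up" projection
W_edit = W + B @ A

print("unsafe:", score(unsafe, W), "->", score(unsafe, W_edit))
print("safe:  ", score(safe, W), "->", score(safe, W_edit))
```

In the actual paper the adapters would be learned against an unlearning objective rather than set in closed form, but the same low-rank structure is what keeps the edit targeted: the update only acts on the span of the unsafe tuple directions, which is why collateral damage to benign objects and relations stays small.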