AI Navigate

Relationship-Aware Safety Unlearning for Multimodal LLMs

arXiv cs.AI / 3/17/2026


Key Points

  • Generative multimodal models can exhibit safety failures that are inherently relational: two benign concepts can become unsafe when linked by a specific action or relation (e.g., child-drinking-wine).
  • The paper proposes relationship-aware safety unlearning, which explicitly represents unsafe object-relation-object (O-R-O) tuples and applies targeted parameter-efficient edits (LoRA) to suppress unsafe tuples while preserving object marginals and safe neighboring relations.
  • The authors validate the approach with CLIP-based experiments and assess robustness under paraphrase, contextual, and out-of-distribution image attacks.
  • By focusing on relational safety instead of isolated concepts, the method aims to reduce collateral damage from unlearning and improve safety without harming benign capabilities.
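The core data structure above is the unsafe object-relation-object tuple. As a minimal illustration (not the paper's implementation; the tuple set and names here are hypothetical), a tuple-level safety check is distinct from a concept-level one precisely because it fires only on the full combination, leaving each object and relation usable in benign contexts:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class OROTuple:
    """An object-relation-object triple, e.g. (child, drinking, wine)."""
    subject: str
    relation: str
    obj: str


# Hypothetical unsafe-tuple set for illustration; the paper's actual
# list of unsafe tuples is not specified in this summary.
UNSAFE_TUPLES = {OROTuple("child", "drinking", "wine")}


def is_unsafe(subject: str, relation: str, obj: str) -> bool:
    """Flags only the full tuple, never its parts in isolation."""
    return OROTuple(subject, relation, obj) in UNSAFE_TUPLES


# The exact tuple is flagged...
assert is_unsafe("child", "drinking", "wine")
# ...but object marginals and neighboring relations stay benign:
assert not is_unsafe("adult", "drinking", "wine")
assert not is_unsafe("child", "drinking", "milk")
```

This is what "preserving object marginals and safe neighboring relations" means operationally: swapping out any single element of the tuple should leave the model's behavior untouched.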

Abstract

Generative multimodal models can exhibit safety failures that are inherently relational: two benign concepts can become unsafe when linked by a specific action or relation (e.g., child-drinking-wine). Existing unlearning and concept-erasure approaches often target isolated concepts or image-text pairs, which can cause collateral damage to benign uses of the same objects and relations. We propose relationship-aware safety unlearning: a framework that explicitly represents unsafe object-relation-object (O-R-O) tuples and applies targeted parameter-efficient edits (LoRA) to suppress unsafe tuples while preserving object marginals and safe neighboring relations. We include CLIP-based experiments and robustness evaluation under paraphrase, contextual, and out-of-distribution image attacks.
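The "targeted parameter-efficient edits (LoRA)" in the abstract refer to low-rank adapters: instead of retraining a full weight matrix W, one trains a small low-rank update B·A and adds it to the frozen base weight. The sketch below shows only this generic LoRA mechanics in NumPy (dimensions and scaling are illustrative; how the authors train the adapter against unsafe tuples is not detailed in this summary):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 8, 2  # rank r much smaller than d_out, d_in
alpha = 1.0               # LoRA scaling factor

W = rng.normal(size=(d_out, d_in))        # frozen base weight
A = rng.normal(size=(r, d_in)) * 0.01     # trainable down-projection
B = np.zeros((d_out, r))                  # trainable up-projection, init 0


def forward(x: np.ndarray, B: np.ndarray, A: np.ndarray) -> np.ndarray:
    """Adapted layer: base weight plus scaled low-rank edit."""
    return (W + (alpha / r) * B @ A) @ x


x = rng.normal(size=d_in)

# With B initialized to zero, the edit is a no-op: the adapted model
# reproduces the base model exactly before any unlearning training.
assert np.allclose(forward(x, B, A), W @ x)

# Parameter efficiency: the edit trains far fewer weights than W itself.
assert r * d_in + d_out * r < d_out * d_in
```

Because only B and A are trained, the suppression edit is localized and cheap to apply or remove, which is what makes LoRA a natural vehicle for targeted unlearning of specific tuples rather than broad concept erasure.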