
A Concept is More Than a Word: Diversified Unlearning in Text-to-Image Diffusion Models

arXiv cs.AI / 3/20/2026

📰 News · Models & Research

Key Points

  • The paper proposes Diversified Unlearning, a distributional approach that represents a concept to be erased from text-to-image diffusion models with multiple contextually diverse prompts rather than a single keyword.
  • It highlights limitations of keyword-based unlearning due to the multidimensional nature of concepts and latent-space entanglements, which can lead to incomplete erasure and over-forgetting.
  • The method can be used as an add-on component to existing unlearning pipelines, achieving stronger erasure, better retention of unrelated concepts, and robustness against adversarial recovery attacks.
  • Experimental results across benchmarks and state-of-the-art baselines demonstrate improved erasure and robustness, suggesting practical safety benefits for model deployment.
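The core idea in the Key Points can be sketched in a few lines: expand a single target keyword into a set of contextually diverse prompts. The template list and function name below are illustrative assumptions, not the paper's actual prompt-construction procedure.

```python
# Illustrative sketch (assumed templates, not the paper's code): represent a
# concept as a distribution of diverse prompts instead of one keyword.
TEMPLATES = [
    "a photo of {c}",
    "an oil painting of {c}",
    "{c} in a city street at night",
    "a close-up portrait of {c}",
    "a cartoon drawing of {c}",
]

def diversify(concept: str, templates=TEMPLATES) -> list[str]:
    """Expand one concept keyword into a contextually diverse prompt set."""
    return [t.format(c=concept) for t in templates]

prompts = diversify("a dog")
# Each prompt describes the same concept in a different visual context,
# covering more of its semantic distribution than the bare keyword.
```

In a real pipeline these prompts would be fed through the diffusion model's text encoder, so the unlearning objective sees the concept's spread rather than a single point estimate.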

Abstract

Concept unlearning has emerged as a promising direction for reducing the risks of harmful content generation in text-to-image diffusion models by selectively erasing undesirable concepts from a model's parameters. Existing approaches typically rely on keywords to identify the target concept to be unlearned. However, we show that this keyword-based formulation is inherently limited: a visual concept is multi-dimensional, can be expressed in diverse textual forms, and often overlaps with related concepts in the latent space, making keyword-only unlearning, which imprecisely indicates the target concept, brittle and prone to over-forgetting. This occurs because a single keyword represents only a narrow point estimate of the concept, failing to cover its full semantic distribution and entangled variations in the latent space. To address this limitation, we propose Diversified Unlearning, a distributional framework that represents a concept through a set of contextually diverse prompts rather than a single keyword. This richer representation enables more precise and robust unlearning. Through extensive experiments across multiple benchmarks and state-of-the-art baselines, we demonstrate that integrating Diversified Unlearning as an add-on component into existing unlearning pipelines consistently achieves stronger erasure, better retention of unrelated concepts, and improved robustness against adversarial recovery attacks.
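The abstract's contrast between a point estimate and a distribution can be made concrete with a toy objective: instead of minimizing an erasure loss against one keyword embedding, average it over embeddings of the whole prompt set. The character-histogram `embed` and squared-distance `erasure_loss` below are toy stand-ins for a real text encoder and the paper's actual objective.

```python
import math

def embed(text: str) -> list[float]:
    # Toy deterministic "embedding": a normalized letter histogram.
    # A real pipeline would use the diffusion model's text encoder.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def erasure_loss(model_out: list[float], target: list[float]) -> float:
    # Squared distance between the model's output and a target (toy stand-in).
    return sum((m - t) ** 2 for m, t in zip(model_out, target))

def distributional_loss(prompts: list[str], model_out: list[float]) -> float:
    # Average the per-prompt loss over the diverse prompt set, so the
    # objective covers the concept's semantic spread rather than the
    # narrow point estimate a single keyword provides.
    return sum(erasure_loss(model_out, embed(p)) for p in prompts) / len(prompts)

prompts = ["a photo of a dog", "an oil painting of a dog"]
loss = distributional_loss(prompts, [0.0] * 26)
```

The design choice this illustrates is the add-on nature claimed in the abstract: any existing pipeline that computes a per-keyword erasure loss can, in principle, swap in an average over a diversified prompt set without changing the rest of its machinery.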