Projected Gradient Unlearning for Text-to-Image Diffusion Models: Defending Against Concept Revival Attacks

arXiv cs.CV / 4/24/2026


Key Points

  • The paper studies machine unlearning for text-to-image diffusion models, focusing on removing undesirable concepts while avoiding full retraining.
  • It identifies a key weakness of existing unlearning methods: erased concepts can “revive” after the model is fine-tuned on downstream data, even if that data is unrelated.
  • The authors adapt Projected Gradient Unlearning (PGU) to the diffusion setting by building a Core Gradient Space (CGS) from retain-concept activations and projecting gradient updates away from it, so that later fine-tuning cannot undo the erasure.
  • When used on top of existing unlearning techniques (ESD, UCE, Receler), PGU eliminates style-concept revival and substantially delays object-concept revival, running in about 6 minutes versus roughly 2 hours for Meta-Unlearning.
  • The work suggests PGU and Meta-Unlearning are complementary, and it recommends choosing retain concepts based on visual feature similarity rather than semantic grouping.

Abstract

Machine unlearning for text-to-image diffusion models aims to selectively remove undesirable concepts from pre-trained models without costly retraining. Current unlearning methods share a common weakness: erased concepts return when the model is fine-tuned on downstream data, even when that data is entirely unrelated. We adapt Projected Gradient Unlearning (PGU) from classification to the diffusion domain as a post-hoc hardening step. By constructing a Core Gradient Space (CGS) from the retain-concept activations and projecting gradient updates into its orthogonal complement, PGU ensures that subsequent fine-tuning cannot undo the achieved erasure. Applied on top of existing methods (ESD, UCE, Receler), the approach eliminates revival for style concepts and substantially delays it for object concepts, running in roughly 6 minutes versus the ~2 hours required by Meta-Unlearning. PGU and Meta-Unlearning turn out to be complementary: which performs better depends on how the concept is encoded, and retain-concept selection should follow visual feature similarity rather than semantic grouping.
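The core mechanism can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the paper's implementation): it assumes the CGS is spanned by the leading singular vectors of a matrix of retain-concept activations, and that hardened fine-tuning projects each gradient onto the orthogonal complement of that subspace before applying the update. Function names, the energy threshold, and the matrix shapes are illustrative assumptions.

```python
import numpy as np

def core_gradient_space(retain_activations: np.ndarray, energy: float = 0.95) -> np.ndarray:
    """Return an orthonormal basis for a CGS-like subspace, keeping enough
    left singular vectors to capture `energy` of the activation spectrum.
    (Threshold and construction are illustrative assumptions.)"""
    U, S, _ = np.linalg.svd(retain_activations, full_matrices=False)
    cum = np.cumsum(S**2) / np.sum(S**2)
    k = int(np.searchsorted(cum, energy)) + 1
    return U[:, :k]  # shape (d, k), orthonormal columns

def project_out(grad: np.ndarray, basis: np.ndarray) -> np.ndarray:
    """Project `grad` onto the orthogonal complement of span(basis), so a
    fine-tuning step cannot move weights along the protected directions."""
    return grad - basis @ (basis.T @ grad)

# Toy usage with random data standing in for real activations/gradients.
rng = np.random.default_rng(0)
acts = rng.standard_normal((64, 32))   # d=64 features, 32 retain samples
basis = core_gradient_space(acts)
g = rng.standard_normal(64)            # a raw fine-tuning gradient
g_proj = project_out(g, basis)
# The projected gradient has no component along any CGS direction.
assert np.allclose(basis.T @ g_proj, 0.0, atol=1e-8)
```

The key design point, as described in the abstract, is that this is a constraint on *future* updates rather than another erasure pass: any downstream fine-tuning routed through such a projection is prevented from re-entering the subspace where the unlearning could be reversed, which is why it composes with ESD, UCE, or Receler rather than replacing them.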