PECKER: A Precisely Efficient Critical Knowledge Erasure Recipe For Machine Unlearning in Diffusion Models

arXiv cs.AI / 4/8/2026


Key Points

  • The paper examines why existing machine unlearning (MU) methods in diffusion/GenAI systems can be inefficient, attributing the issue to poorly directed gradient updates that slow training and can destabilize convergence.
  • It introduces PECKER, an efficient MU approach that uses a distillation framework and a saliency mask to focus parameter updates on those most responsible for forgetting the targeted data.
  • PECKER is reported to match or outperform prevailing MU methods while reducing unnecessary gradient computation and shortening overall unlearning training time.
  • Experiments indicate faster unlearning of related classes or concepts and improved alignment with the true image distributions on CIFAR-10 and STL-10.
  • The results cover both “class forgetting” and “concept forgetting,” with shorter training times reported for each task and no loss in unlearning effectiveness.

Abstract

Machine unlearning (MU) has become a critical technique for the safe and compliant operation of GenAI models. While existing MU methods are effective, most impose prohibitive training time and computational overhead. Our analysis suggests the root cause lies in poorly directed gradient updates, which reduce training efficiency and destabilize convergence. To mitigate these issues, we propose PECKER, an efficient MU approach that matches or outperforms prevailing methods. Within a distillation framework, PECKER introduces a saliency mask to prioritize updates to the parameters that contribute most to forgetting the targeted data, thereby reducing unnecessary gradient computation and shortening overall training time without sacrificing unlearning efficacy. Our method generates samples that unlearn related classes or concepts more quickly, while closely aligning with the true image distribution on the CIFAR-10 and STL-10 datasets, achieving shorter training times for both class forgetting and concept forgetting.
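The core idea, saliency-masked parameter updates, can be illustrated with a toy sketch. This is not the paper's implementation: the quadratic "forgetting" loss, the top-25% threshold, and plain gradient descent are all illustrative assumptions standing in for PECKER's distillation objective.

```python
import numpy as np

# Toy sketch of saliency-masked unlearning updates (assumptions, not the
# paper's method): a quadratic stand-in for the forgetting loss, a fixed
# top-25% saliency threshold, and a plain gradient step.

rng = np.random.default_rng(0)
params = rng.normal(size=8)          # toy model parameters
forget_target = rng.normal(size=8)   # stand-in for the forget-set objective

def forget_grad(theta):
    # Gradient of the toy loss ||theta - forget_target||^2, used here as
    # the per-parameter "contribution to forgetting" signal.
    return 2.0 * (theta - forget_target)

# 1) Saliency score: magnitude of the forgetting gradient per parameter.
saliency = np.abs(forget_grad(params))

# 2) Binary mask: keep only the top 25% most salient parameters.
k = max(1, int(0.25 * params.size))
threshold = np.sort(saliency)[-k]
mask = (saliency >= threshold).astype(float)

# 3) Masked update: only salient parameters move, which is what cuts the
#    unnecessary gradient computation the paper refers to.
lr = 0.1
params = params - lr * mask * forget_grad(params)

print(int(mask.sum()))  # number of parameters actually updated
```

In a real diffusion model the mask would be computed once (or periodically) over the forget set and applied during distillation, so the vast majority of parameters receive no update at all.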