Unlearning the Unpromptable: Prompt-free Instance Unlearning in Diffusion Models
arXiv cs.LG / 3/12/2026
Key Points
- The paper introduces prompt-free instance unlearning for diffusion models, aiming to forget undesired outputs that cannot be specified by text prompts, such as faces or culturally misinterpreted depictions.
- It proposes a surrogate-based unlearning method that combines image editing, timestep-aware weighting, and gradient surgery to guide models toward forgetting targeted outputs while preserving overall integrity.
- Experiments on conditional (Stable Diffusion 3) and unconditional (DDPM-CelebA) diffusion models show that the method unlearns unpromptable outputs that prompt-based baselines cannot target, while outperforming existing prompt-based and prompt-free baselines.
- The work positions the method as a practical hot-fix that diffusion-model providers can apply after deployment to strengthen privacy protection and ethical compliance.
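The second bullet's combination of timestep-aware weighting and gradient surgery can be sketched in code. The snippet below is a minimal illustration, not the paper's implementation: the linear weighting schedule, the function names, and the PCGrad-style projection (dropping the component of the unlearning gradient that conflicts with a retention gradient) are all assumptions chosen to make the idea concrete.

```python
import torch

def timestep_weight(t: torch.Tensor, t_max: int = 1000) -> torch.Tensor:
    # Hypothetical schedule: weight noisier timesteps more heavily.
    # The paper's actual weighting may differ; this is illustrative.
    return t.float() / t_max

def gradient_surgery(g_forget: torch.Tensor, g_retain: torch.Tensor) -> torch.Tensor:
    # PCGrad-style projection (an assumed instantiation of "gradient
    # surgery"): if the unlearning gradient conflicts with the retention
    # gradient (negative inner product), remove the conflicting component
    # so forgetting the target does not degrade retained generations.
    dot = torch.dot(g_forget, g_retain)
    if dot < 0:
        g_forget = g_forget - (dot / g_retain.norm().pow(2)) * g_retain
    return g_forget

def surrogate_loss(eps_pred: torch.Tensor,
                   eps_surrogate: torch.Tensor,
                   t: torch.Tensor) -> torch.Tensor:
    # Timestep-weighted MSE pulling the model's noise prediction toward
    # a surrogate target derived from an edited image, per sampled t.
    per_sample = (eps_pred - eps_surrogate).pow(2).mean(dim=-1)
    return (timestep_weight(t) * per_sample).mean()

# Toy demo: a conflicting pair becomes orthogonal after surgery.
g_f = torch.tensor([1.0, -1.0])
g_r = torch.tensor([0.0, 1.0])
g_fixed = gradient_surgery(g_f, g_r)
```

In a training loop, `surrogate_loss` would be backpropagated for the forget batch, the resulting gradient passed through `gradient_surgery` against a retain-batch gradient, and the projected gradient applied as the update.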