
Unlearning for One-Step Generative Models via Unbalanced Optimal Transport

arXiv cs.CV / 3/18/2026


Key Points

  • The paper introduces UOT-Unlearn, a plug-and-play unlearning framework for one-step generative models using Unbalanced Optimal Transport to forget a target class while preserving overall generation fidelity.
  • It treats unlearning as a trade-off between a forget cost that suppresses the forgotten class and an f-divergence penalty that relaxes marginal constraints to maintain quality.
  • By redistributing the forgotten class's probability mass to remaining classes via UOT, the method avoids producing low-quality or noise-like samples post-unlearning.
  • Experimental results on CIFAR-10 and ImageNet-256 demonstrate superior unlearning success (PUL) and retention quality (u-FID) compared with baselines.
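The trade-off in the second and third bullets can be made concrete with a toy sketch. The function below is an illustration of the general idea, not the paper's actual objective: a forget cost penalizes probability mass the generator still places on the forgotten class, while a KL divergence (one choice of f-divergence) softly anchors the renormalized distribution over the remaining classes to a retain target. All names and the specific loss form are hypothetical.

```python
import numpy as np

def unlearning_loss_sketch(gen_class_probs, forget_class, retain_target, lam=1.0):
    """Toy illustration of a forget-cost + f-divergence trade-off.

    gen_class_probs: class distribution of the generator's outputs.
    forget_class:    index of the class to unlearn.
    retain_target:   desired distribution over the remaining classes.
    lam:             weight of the relaxed-marginal (f-divergence) penalty.
    """
    p = np.asarray(gen_class_probs, dtype=float)

    # Forget cost: mass still assigned to the forgotten class.
    forget_cost = p[forget_class]

    # Redistribute: renormalize the remaining mass over the kept classes.
    keep = np.delete(p, forget_class)
    keep = keep / keep.sum()

    # KL(keep || retain_target) as an example f-divergence penalty that
    # keeps the surviving classes close to the retain distribution.
    q = np.asarray(retain_target, dtype=float)
    eps = 1e-12
    kl = float(np.sum(keep * np.log((keep + eps) / (q + eps))))

    return forget_cost + lam * kl
```

When the generator assigns no mass to the forgotten class and the remaining classes already match the retain target, both terms vanish; any residual mass on the forgotten class or drift in the kept classes raises the loss.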

Abstract

Recent advances in one-step generative frameworks, such as flow map models, have significantly improved the efficiency of image generation by learning direct noise-to-data mappings in a single forward pass. However, machine unlearning for ensuring the safety of these powerful generators remains entirely unexplored. Existing diffusion unlearning methods are inherently incompatible with these one-step models, as they rely on a multi-step iterative denoising process. In this work, we propose UOT-Unlearn, a novel plug-and-play class unlearning framework for one-step generative models based on Unbalanced Optimal Transport (UOT). Our method formulates unlearning as a principled trade-off between a forget cost, which suppresses the target class, and an f-divergence penalty, which preserves overall generation fidelity via relaxed marginal constraints. By leveraging UOT, our method enables the probability mass of the forgotten class to be smoothly redistributed to the remaining classes, rather than collapsing into low-quality or noise-like samples. Experimental results on CIFAR-10 and ImageNet-256 demonstrate that our framework achieves superior unlearning success (PUL) and retention quality (u-FID), significantly outperforming baselines.
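For context, the "relaxed marginal constraints" mentioned in the abstract refer to the standard unbalanced optimal transport formulation, which replaces the hard marginal constraints of classical OT with f-divergence penalties (the paper's specific unlearning objective builds on this general form; the exact cost and penalty choices are its own):

```latex
\mathrm{UOT}(\mu, \nu) \;=\; \inf_{\pi \geq 0}
\int c(x, y)\, \mathrm{d}\pi(x, y)
\;+\; \lambda_1\, D_f(\pi_1 \,\|\, \mu)
\;+\; \lambda_2\, D_f(\pi_2 \,\|\, \nu),
```

where \(\pi_1, \pi_2\) are the marginals of the transport plan \(\pi\), \(c\) is the transport cost, and \(\lambda_1, \lambda_2\) control how strictly the marginals must match \(\mu\) and \(\nu\). Loosening these constraints is what lets probability mass be created or destroyed, i.e., redistributed away from the forgotten class.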