Source Models Leak What They Shouldn't: Unlearning Zero-Shot Transfer in Domain Adaptation Through Adversarial Optimization
arXiv cs.CV / 4/10/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper highlights a privacy risk in source-free domain adaptation (SFDA): a source-trained vision model can unintentionally leak knowledge of classes exclusive to the source domain into the target domain, even though adaptation never sees the source data.
- Experiments show existing SFDA approaches achieve strong zero-shot accuracy on source-exclusive classes when those classes appear in target-domain data, indicating inadvertent information transfer (a minimal probe of this measurement follows the list).
- The authors introduce SCADA-UL (Unlearning Source-exclusive ClAsses in Domain Adaptation) to address this setting, arguing that prior machine unlearning methods don’t properly handle distribution shifts.
- SCADA-UL unlearns those classes during adaptation by adversarially generating “forget class” samples and training on them with a rescaled labeling strategy under adversarial optimization (see the sketch after this list).
- The work also evaluates continual and partially-unknown forget-class variants, reporting that SCADA-UL matches the unlearning performance of retraining from scratch while outperforming baselines; code is released on GitHub.
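
To make the leakage finding concrete, here is a minimal probe, assuming access to target-domain images whose labels fall in the source-exclusive set: it measures the adapted model's top-1 accuracy on exactly those classes. Every name here (`adapted_model`, `loader`, `source_exclusive`) is a placeholder for illustration, not from the paper.

```python
import torch

@torch.no_grad()
def zero_shot_leakage(adapted_model, loader, source_exclusive):
    """Top-1 accuracy of the adapted model on target images whose labels
    are source-exclusive classes it was never adapted on; a high value
    is the unintended zero-shot transfer the paper flags."""
    keep = torch.tensor(sorted(source_exclusive))
    correct = total = 0
    for x, y in loader:
        mask = torch.isin(y, keep)  # keep only source-exclusive samples
        if mask.any():
            pred = adapted_model(x[mask]).argmax(dim=1)
            correct += (pred == y[mask]).sum().item()
            total += mask.sum().item()
    return correct / max(total, 1)
```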
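The summary names two ingredients of SCADA-UL without spelling out the objective, so the following PyTorch sketch shows one plausible reading, not the authors' implementation: targeted adversarial perturbations turn unlabeled target images into synthetic “forget class” samples, and a rescaled soft label spreads probability mass uniformly over the retained classes. All function names and hyperparameters (`pgd_forget_samples`, the PGD step size and budget, `rescaled_forget_labels`) are assumptions.

```python
import torch
import torch.nn.functional as F

def pgd_forget_samples(model, x, forget_class, steps=10, eps=8/255, alpha=2/255):
    """Targeted PGD (assumed generation scheme): perturb unlabeled target
    images until the model assigns them to `forget_class`, giving synthetic
    forget samples without access to any source data."""
    target = torch.full((x.size(0),), forget_class, dtype=torch.long, device=x.device)
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), target)
        grad, = torch.autograd.grad(loss, x_adv)
        # Targeted attack: step *down* the loss, toward the forget class.
        x_adv = (x_adv - alpha * grad.sign()).detach()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project back into the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)             # stay a valid image
    return x_adv.detach()

def rescaled_forget_labels(num_classes, forget_classes):
    """One reading of 'rescaled labeling': a soft label that is zero on
    every forget class and uniform over the retained classes."""
    y = torch.ones(num_classes)
    y[list(forget_classes)] = 0.0
    return y / y.sum()

def unlearning_loss(model, x_adv, soft_label):
    """Cross-entropy against the rescaled soft label: trained this way,
    the model learns to withhold probability from the forget classes."""
    log_p = F.log_softmax(model(x_adv), dim=1)
    return -(soft_label.to(log_p.device) * log_p).sum(dim=1).mean()
```

In the full method, this loss would presumably be minimized jointly with the SFDA adaptation objective on real target batches, with sample generation and the unlearning update alternating in the min-max fashion the summary calls “adversarial optimization.”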