WIN-U: Woodbury-Informed Newton-Unlearning as a retain-free Machine Unlearning Framework

arXiv cs.LG / April 16, 2026


Key Points

  • WIN-U is proposed as a retain-free machine unlearning framework for enforcing “right to be forgotten” in trained models, removing the influence of a designated forget set without needing retained training data.
  • The method relies only on second-order information from the originally trained model and applies a single Newton-style update, using the Woodbury matrix identity and a generalized Gauss-Newton approximation to handle forget-set curvature.
  • WIN-U is designed to approximate the gold-standard retraining optimum (training on only the retain set) via a local second-order expansion, while avoiding the data-access requirements of many existing unlearning approaches.
  • Experiments across multiple vision and language benchmarks report state-of-the-art unlearning effectiveness and strong utility preservation, along with improved robustness against relearning attacks compared to prior methods.
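Spelled out in notation assumed here (this summary does not give the paper's exact symbols): let $\theta^{*}$ minimize the loss on the full data $D = R \cup F$, let $H_{D}$ be the full-data curvature at $\theta^{*}$, and let $J_{F}^{\top} J_{F}$ be a generalized Gauss-Newton approximation of the forget-set curvature (any output-space curvature factor is folded into $J_{F}$ for simplicity). Because $\nabla \mathcal{L}_{D}(\theta^{*}) \approx 0$ at the trained optimum, $\nabla \mathcal{L}_{R}(\theta^{*}) = -\nabla \mathcal{L}_{F}(\theta^{*})$, so a single Newton-style step toward the retain-set optimum can be evaluated without any retain data:

```latex
\theta_{\mathrm{u}}
  = \theta^{*} - H_{R}^{-1}\,\nabla \mathcal{L}_{R}(\theta^{*})
  = \theta^{*} + H_{R}^{-1}\,\nabla \mathcal{L}_{F}(\theta^{*}),
\qquad
H_{R} = H_{D} - J_{F}^{\top} J_{F}.
```

The Woodbury matrix identity then expresses the retain-set Hessian inverse through the full-data inverse plus a small system whose size is the rank of the forget-set term:

```latex
\bigl(H_{D} - J_{F}^{\top} J_{F}\bigr)^{-1}
  = H_{D}^{-1}
  + H_{D}^{-1} J_{F}^{\top}
    \bigl(I - J_{F} H_{D}^{-1} J_{F}^{\top}\bigr)^{-1}
    J_{F} H_{D}^{-1}.
```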

Abstract

Privacy concerns around LLMs have led to a rapidly growing need to enforce the "right to be forgotten". Machine unlearning addresses precisely this task: removing the influence of some specific data, i.e., the forget set, from a trained model. The gold standard for unlearning is to produce the model that would have been learned on only the rest of the training data, i.e., the retain set. Most existing unlearning methods rely on direct access to the retained data, which may not be practical due to privacy or cost constraints. We propose WIN-U, a retained-data-free unlearning framework that requires only second-order information about the originally trained model on the full data. Unlearning is performed with a single Newton-style step. Using the Woodbury matrix identity and a generalized Gauss-Newton approximation of the forget-set curvature, the WIN-U update recovers the closed-form linear solution and serves as a local second-order approximation to the gold-standard retraining optimum. Extensive experiments on various vision and language benchmarks demonstrate that WIN-U achieves state-of-the-art performance in unlearning efficacy and utility preservation, while being more robust against relearning attacks than existing methods. Importantly, WIN-U does not require access to the retained data.
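As a concrete illustration of how the Woodbury identity avoids ever forming the retain-set Hessian, here is a minimal NumPy sketch. The function name, shapes, and the rank-k factor `U` of the forget-set curvature are assumptions chosen for illustration, not the paper's implementation; the sanity check at the end compares the Woodbury path against a direct dense solve.

```python
import numpy as np

def winu_step(theta, H_full_inv, U, g_forget):
    """One Woodbury-based Newton-style unlearning step (sketch).

    Hypothetical setup, not the paper's code:
      H_retain = H_full - U @ U.T, where U @ U.T is a low-rank
      (Gauss-Newton-style) approximation of the forget-set curvature.
    With A = H_full^{-1}, the Woodbury identity gives
      H_retain^{-1} = A + A U (I - U^T A U)^{-1} U^T A,
    so only a k x k system is solved (k = rank of the forget term).
    """
    A, k = H_full_inv, U.shape[1]
    AU = A @ U                                    # d x k
    small = np.eye(k) - U.T @ AU                  # k x k inner system
    step = A @ g_forget + AU @ np.linalg.solve(small, AU.T @ g_forget)
    return theta + step                           # theta* + H_retain^{-1} g_F

# Sanity check against the direct (dense) retain-Hessian solve.
rng = np.random.default_rng(0)
d, k = 8, 2
M = rng.standard_normal((d, d))
H_full = M @ M.T + 5.0 * np.eye(d)                # SPD full-data curvature
U = 0.3 * rng.standard_normal((d, k))             # small low-rank forget factor
g = rng.standard_normal(d)                        # forget-set gradient at theta*
theta = np.zeros(d)

fast = winu_step(theta, np.linalg.inv(H_full), U, g)
direct = theta + np.linalg.solve(H_full - U @ U.T, g)
```

The practical point of this rearrangement is cost: once a full-data inverse (or inverse-vector-product routine) is available, each unlearning request only requires solving a system of size k, the effective rank of the forget-set curvature, rather than refactorizing a d x d retain-set Hessian.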