CiPO: Counterfactual Unlearning for Large Reasoning Models through Iterative Preference Optimization

arXiv cs.CL · April 20, 2026

📰 News · Models & Research

Key Points

  • The paper addresses a key challenge in machine unlearning for Large Reasoning Models (LRMs) that rely on long chain-of-thought (CoT) reasoning, where existing methods can either fail to remove unwanted knowledge or harm reasoning performance.
  • It introduces CiPO (Counterfactual Unlearning through iterative Preference Optimization), which reframes unlearning as a targeted intervention in CoT by generating counterfactual reasoning traces tied to a target “unlearning answer.”
  • CiPO uses iterative preference tuning: as the LRM learns from counterfactual traces, the framework updates preference data to increase divergence from the original model.
  • Experiments on difficult benchmarks indicate CiPO can remove the targeted knowledge from both intermediate CoT steps and final answers while largely preserving the model’s reasoning abilities.
  • Overall, the work claims that its iterative optimization loop resolves the stated dilemma between complete unlearning and preserved reasoning quality.
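To make the loop above concrete, here is a minimal toy sketch of an iterative preference-optimization round in the CiPO spirit. This is an illustration, not the paper's implementation: the `dpo_loss` function is standard Direct Preference Optimization, the per-trace log-probabilities are hypothetical scalar stand-ins for an LRM's sequence likelihoods, and the round structure (inner tuning steps, then widening the gap from the frozen reference) only mirrors the description in the key points.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dpo_loss(policy_lp_w, policy_lp_l, ref_lp_w, ref_lp_l, beta=0.1):
    """Standard DPO loss for one preference pair.

    policy_lp_w / ref_lp_w: policy / reference log-prob of the preferred
    (counterfactual) trace; *_lp_l: same for the rejected (original) trace.
    """
    margin = beta * ((policy_lp_w - ref_lp_w) - (policy_lp_l - ref_lp_l))
    return -math.log(sigmoid(margin))

# Toy "model": one scalar log-prob per reasoning trace (hypothetical values).
ref = {"counterfactual": -5.0, "original": -1.0}   # frozen reference model
policy = dict(ref)                                  # policy starts at reference

beta, lr = 0.1, 1.0
for round_idx in range(3):                # CiPO-style outer rounds
    for step in range(50):                # inner preference-tuning steps
        margin = beta * ((policy["counterfactual"] - ref["counterfactual"])
                         - (policy["original"] - ref["original"]))
        # Gradient of the DPO loss w.r.t. the winner's log-prob.
        grad = -beta * (1.0 - sigmoid(margin))
        policy["counterfactual"] -= lr * grad   # push counterfactual trace up
        policy["original"] += lr * grad         # push original trace down
    # Between rounds, CiPO regenerates the preference data so that it keeps
    # diverging from the original model; omitted in this scalar toy.

final_loss = dpo_loss(policy["counterfactual"], policy["original"],
                      ref["counterfactual"], ref["original"], beta)
```

After a few rounds, the policy assigns higher likelihood to the counterfactual trace than to its original one, which is the mechanism the paper relies on to scrub the unwanted knowledge from the CoT rather than only from the final answer.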

Abstract

Machine unlearning has gained increasing attention in recent years as a promising technique to selectively remove unwanted private or copyrighted information from Large Language Models trained on massive amounts of human data. However, the emergence of Large Reasoning Models (LRMs), which rely on long chain-of-thought (CoT) reasoning to address complex questions, presents a dilemma for unlearning: existing methods either struggle to completely eliminate undesired knowledge from the CoT traces or degrade reasoning performance by interfering with the reasoning process. To this end, we introduce Counterfactual Unlearning through iterative Preference Optimization (CiPO), a novel framework that redefines unlearning as a targeted intervention in the CoT reasoning of LRMs. More specifically, given a desired target answer for unlearning, CiPO instructs the LRM to generate a logically valid counterfactual reasoning trace for preference tuning. As the LRM adjusts to the counterfactual trace, CiPO iteratively updates the preference-learning data to increase the discrepancy from the original model. This iterative loop ensures both thorough unlearning and smooth optimization, effectively mitigating the dilemma. Experiments on challenging benchmarks demonstrate that CiPO excels at unlearning, completely removing knowledge from both the intermediate CoT steps and the final answer, while preserving the reasoning abilities of LRMs.