AI Navigate

Explainable LLM Unlearning Through Reasoning

arXiv cs.AI / 3/12/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper argues that prior unlearning methods such as gradient ascent are untargeted, so they can degrade general abilities or fail to fully remove knowledge; it introduces a reasoning-based unlearning target that specifies both what should be forgotten and how the model should respond after unlearning.
  • It proposes targeted reasoning unlearning (TRU), which uses the reasoning-based target as guidance and combines a cross-entropy supervised loss with a GA-based loss to learn precise knowledge removal while preserving unrelated abilities.
  • The authors evaluate TRU across multiple benchmarks and LLM backbones, showing more reliable unlearning and preserved general capabilities, along with increased robustness under diverse attack scenarios.
  • They present reasoning-augmented unlearning as a practical, explainable paradigm for safe, reliable LLM unlearning, with implications for safety, copyright, and privacy concerns.

Abstract

LLM unlearning is essential for mitigating safety, copyright, and privacy concerns in pre-trained large language models (LLMs). Compared to preference alignment, it offers a more explicit approach, removing undesirable knowledge characterized by specific unlearning datasets. In previous works, gradient ascent (GA) and its variants have shown promise for implementing unlearning, yet their untargeted nature results in unintended degradation of general capabilities, incomplete removal of knowledge, and the generation of incoherent responses, among other issues. We argue that these issues stem from the absence of explicit guidance on what and how models should unlearn. To fill this gap, we introduce a novel unlearning target, the reasoning-based unlearning target, which satisfies both the specified unlearning scope and the specified post-unlearning response. Building on this, we propose targeted reasoning unlearning (TRU), which leverages the reasoning-based unlearning target as guidance. We train on the target with a cross-entropy supervised loss combined with a GA-based loss, enabling the model to learn the reasoning ability needed for precise knowledge removal while preserving unrelated abilities. We evaluate TRU against strong baselines across multiple benchmarks and LLM backbones, and find that it achieves more reliable unlearning while preserving general capabilities. Moreover, TRU exhibits superior robustness under diverse attack scenarios, stemming from the reasoning ability learned through reasoning-based targets. Overall, our study establishes reasoning-augmented unlearning as a practical paradigm for reliable and explainable LLM unlearning.
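To make the objective concrete, here is a minimal PyTorch sketch of what a combined "supervised CE plus GA-based" loss could look like. This is an illustrative reconstruction, not the paper's actual implementation: the function name `tru_style_loss`, the `ga_weight` coefficient, and the simple negated-cross-entropy form of the GA term are all assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def tru_style_loss(logits_target, labels_target, logits_forget, labels_forget,
                   ga_weight=1.0):
    """Hypothetical sketch of a TRU-style combined objective.

    - A cross-entropy term supervises the model toward the
      reasoning-based post-unlearning response (what it SHOULD say).
    - A GA-based term (here, simply a negated cross-entropy) pushes the
      model away from reproducing the original forget-set continuations.
    """
    vocab = logits_target.size(-1)
    # Supervised CE on the reasoning-based unlearning target
    ce_loss = F.cross_entropy(
        logits_target.view(-1, vocab),
        labels_target.view(-1),
        ignore_index=-100,  # standard convention for masked label positions
    )
    # Cross-entropy on the forget data, negated so gradient DESCENT on the
    # total loss performs gradient ASCENT on the forget examples
    forget_ce = F.cross_entropy(
        logits_forget.view(-1, vocab),
        labels_forget.view(-1),
        ignore_index=-100,
    )
    return ce_loss - ga_weight * forget_ce
```

In a real training loop the two logit tensors would come from forward passes over the target-response batch and the forget-set batch, and `ga_weight` would trade off removal strength against preservation of unrelated abilities.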