AI Navigate

LLM Unlearning with LLM Beliefs

arXiv cs.CL / 3/16/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • Large language models trained on vast corpora risk memorizing sensitive content, and conventional unlearning methods based on gradient ascent can redistribute probability mass onto semantically related rephrasings of the forgotten targets, a phenomenon the authors call the squeezing effect.
  • The paper introduces a bootstrapping (BS) framework that turns the model's own high-confidence generations, its model beliefs, into an unlearning signal to counter squeezing, with two instantiations: BS-T (token-level), which attenuates high-probability tokens, and BS-S (sequence-level), which removes entire high-confidence generations.
  • By jointly suppressing target outputs and high-probability beliefs, the BS approach aims for more thorough forgetting while preserving model utility.
  • Empirical results across diverse benchmarks and model families demonstrate the effectiveness of BS-T and BS-S in reducing retention of sensitive content.
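To make the token-level idea above concrete, here is a minimal toy sketch of a BS-T-style objective in plain Python. It is not the paper's implementation: the function name, the `belief_threshold` knob, and the exact form of the loss are all assumptions made for illustration. The point is only the structure of the signal, suppressing the target token and any other token the model itself is highly confident in at the same position.

```python
import math

def bs_t_loss(token_probs, target_ids, belief_threshold=0.5):
    """Toy token-level bootstrapping (BS-T) objective, illustrative only.

    Sums log-probabilities of (a) the target token at each position and
    (b) any other token the model assigns high probability to, a stand-in
    for the paper's "model beliefs". Minimizing this sum pushes both kinds
    of probability mass down, so mass squeezed onto confident alternatives
    is suppressed along with the target itself.
    """
    loss = 0.0
    for probs, tgt in zip(token_probs, target_ids):
        # gradient-ascent-style suppression of the target token
        loss += math.log(probs[tgt])
        # additionally suppress the model's own high-confidence beliefs
        for tok, p in enumerate(probs):
            if tok != tgt and p > belief_threshold:
                loss += math.log(p)
    return loss  # a trainer would minimize this
```

With a one-position distribution `[0.2, 0.6, 0.2]` and target token 0, the loss covers both the target (0.2) and the confident non-target belief (0.6), which a plain gradient-ascent objective would leave untouched.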

Abstract

Large language models trained on vast corpora inherently risk memorizing sensitive or harmful content, which may later resurface in their outputs. Prevailing unlearning methods generally rely on gradient ascent and its variants to lower the probability of specific target responses. However, we find that this strategy induces a critical side effect: probability mass is redistributed into high-likelihood regions, often corresponding to semantically related rephrasings of the targets. We refer to this as the squeezing effect, which explains why many methods yield merely spurious unlearning, a problem further obscured by automated metrics (e.g., ROUGE, truth ratio) that misreport actual success. To address this, we propose a bootstrapping (BS) framework that explicitly links the squeezing effect with the model's own high-confidence generations, namely its model beliefs. Since model beliefs inherently capture the very high-likelihood regions where probability mass is squeezed, incorporating them into the unlearning objective directly counters the squeezing effect. By jointly suppressing both target responses and model beliefs, BS-T (token) attenuates high-probability tokens, whereas BS-S (sequence) removes entire high-confidence generations, together achieving more thorough forgetting while preserving utility. Extensive experiments across diverse benchmarks with various model families confirm the effectiveness of our approach.
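The sequence-level variant described in the abstract can be sketched as a data-side step: treat the model's own high-confidence generations as beliefs and fold them into the forget set. Again this is a hypothetical interface, not the paper's code; the function name, the `threshold` parameter, and the list-based representation are assumptions for illustration.

```python
def bootstrap_forget_set(targets, generations, confidences, threshold=0.8):
    """Toy sequence-level bootstrapping (BS-S) sketch, illustrative only.

    Augments the forget set with the model's own high-confidence
    generations (its "beliefs"), so that entire rephrasings of a target
    are unlearned rather than just the original wording.
    """
    # keep only generations the model is confident in
    beliefs = [g for g, c in zip(generations, confidences) if c >= threshold]
    # suppress both the original targets and the bootstrapped beliefs,
    # deduplicating while preserving order
    return list(dict.fromkeys(targets + beliefs))
```

An unlearning loop would then apply its suppression objective over this augmented set, directly attacking the high-likelihood regions where squeezed probability mass would otherwise accumulate.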