Less is More: Geometric Unlearning for LLMs with Minimal Data Disclosure

arXiv cs.CL / 5/5/2026

Key Points

  • The paper addresses the need for post-hoc unlearning in deployed LLMs to remove specific sensitive content while preserving overall usefulness.
  • It proposes Geometric Unlearning (GU), which performs alignment using prompt-time planning states rather than requiring access to the original training corpus.
  • GU distills a compact low-rank “geometry” of safe behavior from a small set of safe reference prompts and uses lightweight anchor-in-context synthetic prompts to localize alignment.
  • A teacher-distillation regularizer applied to synthetic non-target anchors reduces collateral drift and protects non-target knowledge.
  • Experiments on privacy-focused benchmarks (ToFU and UnlearnPII) show strong suppression of targeted content with minimal degradation on non-target performance, using limited synthetic data.
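The core mechanism in the second and third bullets can be sketched in code. The snippet below is a hypothetical illustration, not the paper's implementation: it assumes hidden "planning" states are available as vectors, distills a low-rank basis from states gathered on safe reference prompts via SVD, and aligns a new hidden state by projecting it onto that subspace. All names, shapes, and the choice of SVD are assumptions for illustration.

```python
import numpy as np

def distill_safe_geometry(safe_hidden_states: np.ndarray, rank: int) -> np.ndarray:
    """Extract a rank-r basis for the 'safe' subspace from hidden states
    collected on safe reference prompts (one state per row)."""
    centered = safe_hidden_states - safe_hidden_states.mean(axis=0)
    # Rows of vt are principal directions; keep the top `rank` of them.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:rank]  # shape: (rank, hidden_dim)

def project_to_safe_geometry(h: np.ndarray, basis: np.ndarray) -> np.ndarray:
    """Align a hidden planning state by projecting it onto the safe subspace."""
    return (h @ basis.T) @ basis

# Toy usage: 20 safe-prompt states in an 8-dim hidden space, rank-3 geometry.
rng = np.random.default_rng(0)
safe_states = rng.normal(size=(20, 8))
basis = distill_safe_geometry(safe_states, rank=3)
h = rng.normal(size=8)
h_aligned = project_to_safe_geometry(h, basis)
```

In this reading, "localized" alignment means the projection is applied only when an anchor-in-context prompt indicates target-related content, leaving other hidden states untouched.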

Abstract

As large language models (LLMs) are increasingly deployed in real-world systems, they must support post-hoc removal of specific content to meet privacy and governance requirements. This motivates selective unlearning, which suppresses information about a particular entity or topic while preserving the LLM's general utility. However, most existing LLM unlearning methods require access to the original training corpus and rely on output-level refusal tuning or broad gradient updates, creating a tension among unlearning strength, non-target preservation, and data availability. We propose Geometric Unlearning (GU), an approach that operates directly on the model's prompt-time planning states without access to the original training corpus. GU distills a compact, low-rank geometry of desired safe behavior from a small set of safe reference prompts, and uses lightweight anchor-in-context synthetic prompts to trigger localized, projection-based alignment of hidden planning representations to this safe geometry. A teacher-distillation regularizer on synthetic non-target anchors further reduces collateral drift. Across privacy-oriented unlearning benchmarks (ToFU and UnlearnPII), GU achieves strong target suppression with minimal impact on non-target performance, demonstrating that effective unlearning can be achieved with minimal synthetic data.
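The teacher-distillation regularizer mentioned in the abstract is commonly realized as a KL-divergence penalty between a frozen teacher's and the edited student's output distributions on non-target prompts. The sketch below assumes that form; the function names and the use of raw logits are illustrative, not taken from the paper.

```python
import numpy as np

def _softmax(x: np.ndarray) -> np.ndarray:
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kl_divergence(p_logits: np.ndarray, q_logits: np.ndarray) -> np.ndarray:
    """KL(p || q) per row, computed from logits."""
    p, q = _softmax(p_logits), _softmax(q_logits)
    return np.sum(p * (np.log(p) - np.log(q)), axis=-1)

def distillation_regularizer(teacher_logits: np.ndarray,
                             student_logits: np.ndarray) -> float:
    """Mean KL(teacher || student) over a batch of non-target anchor prompts,
    penalizing drift of the student away from the frozen teacher."""
    return float(np.mean(kl_divergence(teacher_logits, student_logits)))

# Toy usage: the penalty is zero when the student matches the teacher exactly.
logits = np.array([[1.0, 2.0, 3.0], [0.5, 0.5, 0.5]])
reg = distillation_regularizer(logits, logits)
```

Keeping this term on non-target anchors only is what lets the unlearning update stay aggressive on the target while the regularizer anchors everything else.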