PrivUn: Unveiling Latent Ripple Effects and Shallow Forgetting in Privacy Unlearning

arXiv cs.LG / 4/27/2026

📰 News · Models & Research

Key Points

  • PrivUn is proposed as a new evaluation framework that measures how robust privacy-focused machine unlearning is against three tiers of privacy attacks: direct retrieval, recovery via in-context learning, and restoration via fine-tuning.
  • The study finds that unlearning can produce “gradient-driven ripple effects,” meaning privacy removal may propagate through latent gradient-based associations rather than following conventional semantic/knowledge-graph relationships.
  • A major problem identified is “shallow forgetting,” where most existing methods fail to remove private information that is spread across many deep layers of the model.
  • Two validation strategies are explored: association-aware core-set selection using gradient similarity, and multi-layer deep intervention via representational constraints (see the sketch after this list). Both aim to shift unlearning from shallow forgetting toward deep forgetting.
  • Overall, the paper suggests current privacy unlearning approaches are weaker than expected and provides tools and methods to evaluate and improve them more reliably.
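
As a concrete illustration of the gradient-association idea behind the first strategy, here is a minimal sketch of association-aware core-set selection via gradient cosine similarity. It is a hedged reconstruction, not the paper's implementation: `model`, `loss_fn`, `forget_example`, and `candidates` are hypothetical placeholders standing in for any differentiable PyTorch model with a per-example loss.

```python
# Hypothetical sketch: rank candidate examples by gradient similarity to a
# forget example. High cosine similarity between per-example gradients hints
# at a latent gradient-based association, i.e., the examples an unlearning
# update is likely to "ripple" onto. Not the paper's actual code.
import torch


def example_grad(model, loss_fn, x, y):
    """Flattened gradient of the loss on one example w.r.t. all trainable parameters."""
    loss = loss_fn(model(x), y)
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.reshape(-1) for g in grads])


def select_core_set(model, loss_fn, forget_example, candidates, k=8):
    """Pick the k candidates whose gradients align most with the forget example's."""
    g_forget = example_grad(model, loss_fn, *forget_example)
    scores = []
    for x, y in candidates:
        g = example_grad(model, loss_fn, x, y)
        scores.append(torch.nn.functional.cosine_similarity(g_forget, g, dim=0).item())
    ranked = sorted(range(len(candidates)), key=lambda i: scores[i], reverse=True)
    return [candidates[i] for i in ranked[:k]]  # gradient-associated core set
```

Candidates whose gradients align strongly with the forget example's are exactly the points a gradient-driven ripple effect would disturb, so folding them into the core set lets an unlearning procedure monitor or constrain that collateral impact.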

Abstract

Large language models (LLMs) often memorize private information during training, raising serious privacy concerns. While machine unlearning has emerged as a promising solution, its true effectiveness against privacy attacks remains unclear. To address this, we propose PrivUn, a new evaluation framework that systematically assesses unlearning robustness through three-tier attack scenarios (direct retrieval, in-context learning recovery, and fine-tuning restoration), combined with quantitative analysis using forgetting scores, association metrics, and forgetting-depth assessment. Our study exposes significant weaknesses in current unlearning methods, revealing two key findings: 1) unlearning exhibits gradient-driven ripple effects: unlike traditional forgetting, which follows semantic relations (e.g., knowledge graphs), privacy unlearning propagates along latent gradient-based associations; and 2) most methods suffer from shallow forgetting, failing to remove private information distributed across multiple deep model layers. To validate these insights, we explore two strategies: association-aware core-set selection that leverages gradient similarity, and multi-layer deep intervention through representational constraints. These strategies represent a paradigm shift from shallow forgetting to deep forgetting.
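
To make "forgetting depth" concrete, the sketch below shows one plausible way to assess it: fit a linear probe on each layer's hidden states of the unlearned model and check whether the supposedly forgotten private attribute is still linearly decodable. This probing setup is an illustrative assumption, not the authors' exact protocol; `model` and `tokenizer` stand in for any Hugging Face-style causal LM that exposes `output_hidden_states`.

```python
# Hypothetical sketch: layer-wise linear probing as a forgetting-depth measure.
# If probes on middle/deep layers still recover the private attribute well
# above chance, the information was only suppressed at the surface
# ("shallow forgetting"). Placeholder names; not the paper's code.
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score


@torch.no_grad()
def layer_features(model, tokenizer, texts):
    """Mean-pooled hidden states per layer, one feature matrix per layer."""
    feats = None
    for text in texts:
        ids = tokenizer(text, return_tensors="pt")
        hidden = model(**ids, output_hidden_states=True).hidden_states  # embeddings + every layer
        pooled = [h.mean(dim=1).squeeze(0).float().cpu().numpy() for h in hidden]
        feats = [[] for _ in pooled] if feats is None else feats
        for layer, vec in enumerate(pooled):
            feats[layer].append(vec)
    return [np.stack(layer_vecs) for layer_vecs in feats]


def forgetting_depth_profile(model, tokenizer, texts, private_labels):
    """Cross-validated probe accuracy per layer; high accuracy = info still present."""
    accs = []
    for X in layer_features(model, tokenizer, texts):
        probe = LogisticRegression(max_iter=1000)
        accs.append(cross_val_score(probe, X, private_labels, cv=3).mean())
    return accs
```

A flat, near-chance accuracy profile across layers would indicate deep forgetting; accuracy that stays high through the middle and deep layers would match the shallow-forgetting pattern the paper reports.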