Graph Propagated Projection Unlearning: A Unified Framework for Vision and Audio Discriminative Models

arXiv cs.AI / 4/16/2026


Key Points

  • The paper proposes Graph-Propagated Projection Unlearning (GPPU), a unified class-level machine unlearning method that works across both vision and audio discriminative models.
  • GPPU uses graph-based feature-space propagation to find class-specific directions, then projects representations onto an orthogonal subspace and applies targeted fine-tuning to remove the target class information.
  • Experiments across six vision datasets and two large-scale audio benchmarks (covering CNNs, Vision Transformers, and Audio Transformers) show efficient unlearning performance.
  • The authors report 10–20× speedups over prior unlearning approaches while maintaining utility on non-target (retained) classes.
  • The work frames GPPU as a principled, modality-agnostic approach to responsible deep learning, with evaluations at a scale the authors say has been less explored previously.
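The projection step described above can be illustrated with a minimal sketch. This is not the authors' implementation: the graph construction, propagation rule, and helper names (`graph_propagated_direction`, `orthogonal_projection`) are assumptions used for illustration. The idea is to propagate target-class membership over a feature-similarity graph, estimate a class-specific direction from the propagated weights, and then project classifier weights onto the subspace orthogonal to that direction; the paper's targeted fine-tuning on retained classes would follow this step.

```python
import numpy as np

def graph_propagated_direction(features, adjacency, target_mask, steps=3, alpha=0.5):
    """Hypothetical sketch: propagate target-class indicators over a
    feature-similarity graph, then use the propagated weights to estimate
    a class-specific direction in feature space."""
    # Row-normalize the adjacency so each propagation step is a weighted average.
    deg = adjacency.sum(axis=1, keepdims=True)
    P = adjacency / np.clip(deg, 1e-12, None)
    scores = target_mask.astype(float)
    for _ in range(steps):
        # Mix propagated scores with the original membership indicator.
        scores = alpha * (P @ scores) + (1 - alpha) * target_mask
    # Score-weighted mean of features approximates a class-specific direction.
    direction = (scores[:, None] * features).sum(axis=0) / scores.sum()
    return direction / np.linalg.norm(direction)

def orthogonal_projection(weight, directions):
    """Project weight rows onto the subspace orthogonal to the given
    class-specific directions: W <- W (I - U U^T)."""
    U, _ = np.linalg.qr(np.stack(directions, axis=1))  # orthonormal basis
    proj = np.eye(weight.shape[1]) - U @ U.T
    return weight @ proj
```

After the projection, any component of the weights aligned with the unlearned class direction is removed (`W_new @ direction ≈ 0`), which is what makes the subsequent fine-tuning targeted rather than a full retrain.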

Abstract

The need to selectively and efficiently erase learned information from deep neural networks is becoming increasingly important for privacy, regulatory compliance, and adaptive system design. We introduce Graph-Propagated Projection Unlearning (GPPU), a unified and scalable algorithm for class-level unlearning that operates across both vision and audio models. GPPU employs graph-based propagation to identify class-specific directions in the feature space and projects representations onto the orthogonal subspace; targeted fine-tuning then ensures that target-class information is effectively and irreversibly removed. Through comprehensive evaluations on six vision datasets and two large-scale audio benchmarks spanning a variety of architectures including CNNs, Vision Transformers, and Audio Transformers, we demonstrate that GPPU achieves highly efficient unlearning, realizing 10–20× speedups over prior methodologies while preserving model utility on retained classes. Our framework provides a principled and modality-agnostic approach to machine unlearning, evaluated at a scale that has received limited attention in prior work, contributing toward more efficient and responsible deep learning.