Stable Multimodal Graph Unlearning via Feature-Dimension Aware Quantile Selection

arXiv cs.LG / 5/6/2026


Key Points

  • The paper argues that existing graph unlearning methods often use uniform parameter selection/editing across GNN layers, which can be especially damaging for multimodal graphs with high-dimensional cross-modal projections.
  • It introduces FDQ (Feature-Dimension Aware Quantile), which detects the layers tied to high-dimensional input projections and applies more conservative, quantile-threshold-based suppression there.
  • FDQ keeps the core importance-estimation mechanism unchanged, but integrates with diagonal sensitivity-based parameter importance analysis to support efficient node and edge unlearning.
  • Experiments on Ele-Fashion and Goodreads-NC show that FDQ preserves utility more effectively while still achieving effective forgetting, as measured in part by resistance to membership inference attacks.
  • The work positions FDQ as a principled, robust approach for privacy-aware unlearning in high-dimensional multimodal graph learning settings.

Abstract

Graph unlearning remains a critical technique for supporting privacy-preserving and sustainable multimodal graph learning. However, we observe that existing unlearning strategies tend to apply uniform parameter selection and editing across all graph neural network (GNN) layers, which is especially harmful for multimodal graphs where high-dimensional input projections encode dominant cross-modal knowledge. As a result, over-editing these sensitive layers often leads to catastrophic utility degradation after forgetting, undermining both stable learning and effective privacy protection. To address this gap, we propose FDQ, a Feature-Dimension Aware Quantile framework for multimodal graph unlearning. FDQ adaptively identifies high-dimensional input projection layers and applies more conservative, FDQ-guided quantile thresholds when constructing suppression sets, while keeping the underlying importance estimation mechanism unchanged. FDQ is seamlessly integrated with diagonal sensitivity-based parameter importance analysis to enable efficient node and edge unlearning under general forget requests. Through extensive experiments on Ele-Fashion and Goodreads-NC, we demonstrate that FDQ consistently achieves strong utility preservation while maintaining effective forgetting against membership inference attacks. Overall, FDQ offers a principled and robust solution for privacy-aware unlearning in high-dimensional multimodal graph systems.
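The abstract's core mechanism, per-layer quantile thresholds on diagonal sensitivity scores, with stricter thresholds for high-dimensional input projections, can be sketched in a few lines. This is a hypothetical illustration, not the paper's released code: the squared-gradient sensitivity proxy, the `base_q`/`conservative_q` quantile values, the `dim_threshold` cutoff, and the heuristic of treating wide 2-D weights as cross-modal projection layers are all assumptions made for the sketch.

```python
import torch
import torch.nn as nn

def diagonal_sensitivity(model, loss):
    """Diagonal (per-parameter) importance, approximated here by squared
    gradients of the forget-set loss. A stand-in for the paper's diagonal
    sensitivity-based importance analysis."""
    model.zero_grad()
    loss.backward()
    return {name: p.grad.detach() ** 2
            for name, p in model.named_parameters() if p.grad is not None}

def fdq_suppression_masks(model, sensitivity,
                          base_q=0.90, conservative_q=0.99, dim_threshold=512):
    """Build per-layer suppression sets by thresholding sensitivity at a
    quantile. Layers whose input dimension is large (heuristically, wide 2-D
    weights, i.e. likely cross-modal input projections) get a higher quantile,
    so fewer of their parameters are selected for editing."""
    masks = {}
    for name, p in model.named_parameters():
        if name not in sensitivity:
            continue
        s = sensitivity[name]
        in_dim = p.shape[1] if p.dim() == 2 else 0  # input width for 2-D weights
        q = conservative_q if in_dim >= dim_threshold else base_q
        thresh = torch.quantile(s.flatten(), q)
        masks[name] = s > thresh  # True = candidate for suppression
    return masks

# Usage on a toy two-layer model: the wide 768-dim input projection should
# end up with a smaller suppressed fraction than the narrow output layer.
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(768, 8), nn.ReLU(), nn.Linear(8, 2))
loss = model(torch.randn(4, 768)).sum()
masks = fdq_suppression_masks(model, diagonal_sensitivity(model, loss))
frac_wide = masks["0.weight"].float().mean().item()
frac_narrow = masks["2.weight"].float().mean().item()
```

The design choice mirrors the paper's argument: uniform thresholds over-edit the dominant high-dimensional projections, so making the quantile dimension-aware leaves the importance estimator itself untouched while shrinking the suppression set exactly where edits are most damaging.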