Attack by Unlearning: Unlearning-Induced Adversarial Attacks on Graph Neural Networks
arXiv cs.LG / 3/20/2026
Key Points
- The paper introduces the concept of unlearning corruption attacks, showing how privacy-preserving unlearning can create a new attack surface for graph neural networks.
- It shows that an attacker can inject carefully chosen nodes during training and later trigger their deletion, so that accuracy degrades only after unlearning while the poisoned model behaves normally beforehand.
- The attack is formulated as a bi-level optimization solved with gradient-based updates, using a surrogate model to generate pseudo-labels for the injected nodes, which lets the attacker exploit the unlearning process stealthily (a simplified sketch follows this list).
- Extensive experiments across benchmarks and unlearning methods demonstrate that small, well-designed unlearning requests can cause significant accuracy drops, raising concerns about robustness and regulatory compliance in real-world GNN systems; the authors state the source code will be released after acceptance.
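To make the bi-level structure concrete, here is a minimal, runnable PyTorch sketch of the outer attack loop under heavy simplifications that are our assumptions, not the paper's method: an MLP stands in for the GNN (graph structure omitted), the victim is trained on clean data only, the victim doubles as the surrogate that produces pseudo-labels, unlearning is approximated by a single gradient-ascent step on the injected nodes' loss, and all sizes and step sizes (`eta`, learning rates) are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call

torch.manual_seed(0)

# Stand-in "graph": node features and labels only; message passing is omitted.
X_clean = torch.randn(200, 16)
y_clean = torch.randint(0, 4, (200,))

# Victim model: an MLP stands in for the GNN.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

# Inner level (simplified): train the victim, here on clean data only.
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    F.cross_entropy(model(X_clean), y_clean).backward()
    opt.step()

params = dict(model.named_parameters())

# Attacker's variable: features of the nodes to inject and later delete.
X_inj = torch.randn(10, 16, requires_grad=True)
atk_opt = torch.optim.Adam([X_inj], lr=0.05)
eta = 0.5  # unlearning step size (hypothetical)

for step in range(50):
    atk_opt.zero_grad()

    # Surrogate pseudo-labels for the injected nodes (victim doubles as surrogate).
    with torch.no_grad():
        y_inj = model(X_inj).argmax(dim=1)

    # Approximate unlearning of the injected nodes: one gradient-ascent step
    # on their loss, a crude stand-in for a real unlearning method.
    loss_inj = F.cross_entropy(model(X_inj), y_inj)
    grads = torch.autograd.grad(loss_inj, list(params.values()), create_graph=True)
    unlearned = {n: p + eta * g for (n, p), g in zip(params.items(), grads)}

    # Outer level: choose X_inj so the *post-unlearning* model fails on clean nodes.
    clean_loss = F.cross_entropy(functional_call(model, unlearned, (X_clean,)), y_clean)
    (-clean_loss).backward()  # ascend the clean loss w.r.t. X_inj
    atk_opt.step()

print(f"clean loss after simulated unlearning: {clean_loss.item():.3f}")
```

Because the simulated unlearning step is written with `create_graph=True` and `torch.func.functional_call`, the clean-data loss stays differentiable with respect to the injected features, which is what makes the gradient-based outer update possible; the paper's actual attack targets real unlearning methods rather than this one-step approximation.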
Related Articles
- GDPR and AI Training Data: What You Need to Know Before Training on Personal Data (Dev.to)
- Edge-to-Cloud Swarm Coordination for heritage language revitalization programs with embodied agent feedback loops (Dev.to)
- Big Tech firms are accelerating AI investments and integration, while regulators and companies focus on safety and responsible adoption. (Dev.to)
- AI Crawler Management: The Definitive Guide to robots.txt for AI Bots (Dev.to)
- Data Sovereignty Rules and Enterprise AI (Dev.to)