Computation and Communication Efficient Federated Unlearning via On-server Gradient Conflict Mitigation and Expression
arXiv cs.LG / 3/17/2026
Key Points
- The paper proposes Federated On-server Unlearning (FOUL), a two-stage framework, consisting of a learning-to-unlearn phase and an on-server knowledge aggregation phase, that removes a forget client's data from the global model without accessing any client data (see the sketch after this list).
- It introduces a new data setting for Federated Unlearning and a time-to-forget metric that quantifies how quickly unlearning reaches optimal performance (a second sketch below illustrates one plausible reading of this metric).
- Experiments on three datasets show that FOUL achieves unlearning efficacy competitive with or superior to retraining from scratch, while significantly reducing time-to-forget, communication, and computation costs.
- By enabling privacy-preserving and efficient unlearning, FOUL aims to address cross-client knowledge leakage and regulatory requirements in federated learning.
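
This summary does not spell out FOUL's update rules, so the following is only a minimal sketch of what on-server gradient conflict mitigation could look like, assuming a PCGrad-style projection between a negated forget-client update and the averaged retain-client updates. The function names (`mitigate_conflict`, `server_unlearning_step`) and the projection rule are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def mitigate_conflict(g_unlearn: np.ndarray, g_retain: np.ndarray) -> np.ndarray:
    """Project the unlearning direction off the retain direction on conflict.

    Assumption: if the two updates oppose each other (negative inner
    product), drop the component of g_unlearn that opposes g_retain, so
    forgetting the target client degrades retained knowledge less.
    """
    dot = float(np.dot(g_unlearn, g_retain))
    if dot < 0.0:  # conflicting directions
        g_unlearn = g_unlearn - dot / (np.dot(g_retain, g_retain) + 1e-12) * g_retain
    return g_unlearn

def server_unlearning_step(theta, retain_updates, forget_update, lr=0.1):
    """One hypothetical on-server update combining both signals.

    retain_updates: pseudo-gradients from the remaining clients.
    forget_update:  pseudo-gradient from the forget client; its negation
                    acts as an ascent direction that erases its contribution.
    """
    g_retain = np.mean(retain_updates, axis=0)
    g_unlearn = mitigate_conflict(-forget_update, g_retain)
    return theta - lr * (g_retain + g_unlearn)

# Toy usage: two retained clients and one forget client.
theta = np.zeros(4)
retain = [np.array([1.0, 0.5, 0.0, -0.2]), np.array([0.8, 0.4, 0.1, 0.0])]
forget = np.array([0.9, -0.3, 0.2, 0.1])
theta = server_unlearning_step(theta, retain, forget)
```

Everything here runs on the server against stored client updates, which is consistent with the summary's claim that no client data is accessed during unlearning.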
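
The summary describes time-to-forget only as measuring how quickly unlearning achieves optimal performance. One plausible reading, sketched below, is the number of rounds until the model's forget-set metric enters a tolerance band around a gold-standard reference (e.g., a model retrained without the forget client). The tolerance convention and the retrained-model reference are assumptions.

```python
def time_to_forget(forget_acc_history, target_acc, tol=0.02):
    """Return the first round at which accuracy on the forget client's
    data enters a tolerance band around `target_acc` (assumed here to be
    the accuracy of a model retrained without that client).

    Returns None if the target band is never reached.
    """
    for round_idx, acc in enumerate(forget_acc_history):
        if abs(acc - target_acc) <= tol:
            return round_idx
    return None

# Example: the retrained reference reaches 0.31 accuracy on the forget set.
history = [0.92, 0.80, 0.55, 0.40, 0.33, 0.30]
print(time_to_forget(history, target_acc=0.31))  # -> 4
```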