FedIDM: Achieving Fast and Stable Convergence in Byzantine Federated Learning through Iterative Distribution Matching

arXiv cs.LG / 4/17/2026

📰 News · Models & Research

Key Points

  • The paper argues that many Byzantine-robust federated learning methods converge slowly and unstably, and often lose model utility under high proportions of colluding malicious clients.
  • It proposes FedIDM, a Byzantine-robust FL approach that uses distribution matching to create trustworthy condensed data for identifying and filtering abnormal clients.
  • FedIDM includes two key components: attack-tolerant condensed data generation and a robust aggregation scheme with negative contribution-based rejection.
  • Experimental results on three benchmark datasets show FedIDM delivers fast, stable convergence while preserving acceptable utility across multiple state-of-the-art Byzantine attack settings with many malicious clients.

Abstract

Most existing Byzantine-robust federated learning (FL) methods suffer from slow and unstable convergence. Moreover, when handling a substantial proportion of colluding malicious clients, achieving robustness typically entails compromising model utility. To address these issues, this work introduces FedIDM, which employs distribution matching to construct trustworthy condensed data for identifying and filtering abnormal clients. FedIDM consists of two main components: (1) attack-tolerant condensed data generation, and (2) robust aggregation with negative contribution-based rejection. Together, these components exclude local updates that either deviate from the update direction derived from the condensed data or incur a significant loss on the condensed dataset. Comprehensive evaluations on three benchmark datasets demonstrate that FedIDM achieves fast and stable convergence while maintaining acceptable model utility under multiple state-of-the-art Byzantine attacks involving a large number of malicious clients.
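To make the two rejection criteria concrete, the sketch below illustrates the general idea in NumPy: a client update is discarded if its direction disagrees with a trusted reference update derived from condensed data, or if applying it increases the loss measured on the condensed dataset. This is an illustrative toy, not the paper's implementation; the function name, the cosine threshold, the toy quadratic loss, and the fallback behavior are all assumptions made for the example.

```python
import numpy as np

def robust_aggregate(client_updates, reference_update, global_model, loss_fn,
                     cos_threshold=0.0):
    """Filter client updates against a trusted reference (toy sketch).

    Rejects an update if (a) its cosine similarity with the reference
    update is below `cos_threshold`, or (b) applying it would increase
    the loss on the condensed data ("negative contribution").
    """
    base_loss = loss_fn(global_model)
    accepted = []
    for u in client_updates:
        cos = np.dot(u, reference_update) / (
            np.linalg.norm(u) * np.linalg.norm(reference_update) + 1e-12)
        if cos <= cos_threshold:
            continue  # direction deviates from the trusted reference
        if loss_fn(global_model + u) > base_loss:
            continue  # update worsens loss on the condensed dataset
        accepted.append(u)
    if not accepted:
        return reference_update  # fall back to the trusted update alone
    return np.mean(accepted, axis=0)

# --- Illustrative usage with a toy quadratic loss (all values hypothetical) ---
target = np.array([1.0, 1.0])                       # stand-in optimum
global_model = np.zeros(2)
loss_fn = lambda w: float(np.sum((w - target) ** 2))
reference_update = 0.1 * (target - global_model)    # trusted direction

client_updates = [
    np.array([0.2, 0.2]),    # honest: moves toward the target, accepted
    np.array([-0.5, -0.5]),  # Byzantine: reversed direction, rejected
]
aggregated = robust_aggregate(client_updates, reference_update,
                              global_model, loss_fn)
```

Here the reversed update fails the cosine check and is excluded, so only the honest update survives aggregation; in the paper's setting the reference update and loss would come from the condensed data rather than a known target.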