Sample Selection Using Multi-Task Autoencoders in Federated Learning with Non-IID Data

arXiv cs.LG / 4/30/2026


Key Points

  • The paper proposes federated learning sample-selection techniques to reduce the impact of redundant, malicious, abnormal, and noisy training samples that can degrade accuracy and efficiency.
  • It introduces a multi-task autoencoder framework that estimates each image sample’s contribution using loss and feature analysis, coupled with unsupervised outlier detection methods.
  • Clients filter noisy samples using methods managed by the central server: one-class SVM (OCSVM), isolation forest (IF), and an adaptive loss threshold (AT); feature-based selection is further improved with a centrally controlled multi-class deep SVDD loss.
  • Experiments on CIFAR-10 and MNIST under varying client counts, non-IID data distributions, and noise levels up to 40% show accuracy gains of up to 7.02% (CIFAR-10) and 1.83% (MNIST) using loss-based selection, plus up to 0.99% improvement on CIFAR-10 from federated SVDD loss.
  • Overall, the results indicate the proposed methods are effective and robust across different federated settings and noise conditions.
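The adaptive loss threshold (AT) idea in the points above can be sketched as follows. The paper's exact thresholding rule is not reproduced here; this is a minimal illustration assuming a simple mean-plus-k-standard-deviations cutoff over per-sample losses, which keeps low-loss samples and flags high-loss (likely noisy) ones.

```python
import numpy as np

def adaptive_loss_filter(losses, k=1.0):
    """Keep samples whose training loss falls below mean + k * std.

    Hypothetical sketch of an adaptive loss threshold (AT): the cutoff
    mean + k * std is an illustrative assumption, not the paper's exact
    rule. Returns a boolean keep-mask and the threshold used.
    """
    losses = np.asarray(losses, dtype=float)
    threshold = losses.mean() + k * losses.std()
    keep = losses < threshold
    return keep, threshold

# Three clean samples and one high-loss outlier: only the outlier is dropped.
keep, thr = adaptive_loss_filter([0.2, 0.3, 0.25, 5.0], k=1.0)
```

In a federated setting, the server would aggregate loss statistics (or the threshold itself) across clients, so each client applies a consistent filtering rule without sharing raw data.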

Abstract

Federated learning is a machine learning paradigm in which multiple devices collaboratively train a model under the supervision of a central server while ensuring data privacy. However, its performance is often hindered by redundant, malicious, or abnormal samples, leading to model degradation and inefficiency. To overcome these issues, we propose novel sample selection methods for image classification, employing a multitask autoencoder to estimate sample contributions through loss and feature analysis. Our approach incorporates unsupervised outlier detection, using one-class support vector machine (OCSVM), isolation forest (IF), and adaptive loss threshold (AT) methods managed by a central server to filter noisy samples on clients. We also propose a multi-class deep support vector data description (SVDD) loss controlled by a central server to enhance feature-based sample selection. We validate our methods on CIFAR10 and MNIST datasets across varying numbers of clients, non-IID distributions, and noise levels up to 40%. The results show significant accuracy improvements with loss-based sample selection, achieving gains of up to 7.02% on CIFAR10 with OCSVM and 1.83% on MNIST with AT. Additionally, our federated SVDD loss further improves feature-based sample selection, yielding accuracy gains of up to 0.99% on CIFAR10 with OCSVM. These results show the effectiveness of our methods in improving model accuracy across various client counts and noise conditions.
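The multi-class deep SVDD loss described in the abstract penalizes the distance of each sample's feature vector to its class center. A minimal sketch of the per-class distance computation, assuming class centers are simply the feature means (the paper's centers would be maintained by the central server, and `multiclass_svdd_loss` is a hypothetical name):

```python
import numpy as np

def multiclass_svdd_loss(features, labels, centers=None):
    """Mean squared distance of each feature vector to its class center.

    Illustrative sketch: centers default to per-class feature means,
    standing in for the server-controlled centers in the paper.
    Returns the scalar loss and per-sample squared distances, which
    can also serve as outlier scores for feature-based selection.
    """
    features = np.asarray(features, dtype=float)
    labels = np.asarray(labels)
    if centers is None:
        centers = {c: features[labels == c].mean(axis=0)
                   for c in np.unique(labels)}
    # Squared Euclidean distance from each sample to its own class center.
    d2 = np.array([np.sum((f - centers[c]) ** 2)
                   for f, c in zip(features, labels)])
    return d2.mean(), d2

# Two well-separated classes in 2-D feature space.
feats = [[0, 0], [2, 0], [10, 10], [12, 10]]
loss, dists = multiclass_svdd_loss(feats, [0, 0, 1, 1])
```

Minimizing this loss pulls same-class features toward a compact region, so samples with large `dists` values (far from their class center) are natural candidates to filter out before local training.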