FB-NLL: A Feature-Based Approach to Tackle Noisy Labels in Personalized Federated Learning

arXiv cs.LG / 4/22/2026


Key Points

  • Personalized Federated Learning (PFL) often clusters users by relying on iterative learning dynamics, but this approach can be misled by noisy labels and low-quality data.
  • The proposed FB-NLL framework replaces dynamics-based clustering with a feature-centric, geometry-aware one-shot clustering that uses spectral properties of feature covariance and subspace similarity.
  • FB-NLL further introduces a label noise detection and correction mechanism inside clusters using feature-space directional alignment and class-specific feature subspace assignment, avoiding the need to estimate noise transition matrices.
  • The method is model-independent, can be combined with existing noise-robust training techniques, and is shown via extensive experiments to improve both average accuracy and performance stability across various datasets and noise conditions.
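The geometry-aware one-shot clustering in the second bullet can be illustrated with a small sketch. This is not the paper's exact algorithm; it is a minimal, hedged reconstruction of the general idea: each client is summarized by the top eigenvectors of its local feature covariance, pairwise subspace similarity is measured via principal angles, and clients whose subspaces align above a threshold are grouped. The function names, the rank `k`, and the threshold `tau` are all illustrative choices, not values from the paper.

```python
import numpy as np

def principal_subspace(features, k=3):
    """Top-k eigenvectors of a client's feature covariance (d, k)."""
    cov = np.cov(features, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)          # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:k]           # keep the k largest
    return eigvecs[:, order]

def subspace_similarity(U, V):
    """Mean squared cosine of principal angles between two subspaces, in [0, 1]."""
    k = U.shape[1]
    return np.linalg.norm(U.T @ V) ** 2 / k         # Frobenius norm

def one_shot_cluster(client_features, k=3, tau=0.8):
    """Group clients whose feature subspaces align above tau (union-find)."""
    subspaces = [principal_subspace(F, k) for F in client_features]
    n = len(subspaces)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]           # path compression
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if subspace_similarity(subspaces[i], subspaces[j]) >= tau:
                parent[find(i)] = find(j)           # merge the two groups
    return [find(i) for i in range(n)]
```

Because the similarity is computed from raw features only, the grouping is label-agnostic and needs a single round of statistics rather than repeated training rounds, which matches the one-shot, communication-light claim in the summary.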

Abstract

Personalized Federated Learning (PFL) aims to learn multiple task-specific models rather than a single global model across heterogeneous data distributions. Existing PFL approaches typically rely on iterative optimization signals, such as model update trajectories, to cluster together users working on the same tasks. However, these learning-dynamics-based methods are inherently vulnerable to low-quality data and noisy labels, as corrupted updates distort clustering decisions and degrade personalization performance. To tackle this, we propose FB-NLL, a feature-centric framework that decouples user clustering from iterative training dynamics. By exploiting the intrinsic heterogeneity of local feature spaces, FB-NLL characterizes each user through the spectral structure of the covariances of their feature representations and leverages subspace similarity to identify task-consistent user groupings. This geometry-aware clustering is label-agnostic and is performed in a one-shot manner prior to training, significantly reducing communication overhead and computational costs compared to iterative baselines. Complementing this, we introduce a feature-consistency-based detection and correction strategy to address noisy labels within clusters. By leveraging directional alignment in the learned feature space and assigning labels based on class-specific feature subspaces, our method mitigates corrupted supervision without requiring estimation of stochastic noise transition matrices. In addition, FB-NLL is model-independent and integrates seamlessly with existing noise-robust training techniques. Extensive experiments across diverse datasets and noise regimes demonstrate that our framework consistently outperforms state-of-the-art baselines in terms of average accuracy and performance stability.
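The label-correction idea in the abstract, detecting and fixing noisy labels via directional alignment with class-specific feature subspaces, can also be sketched briefly. The code below is a loose illustration under simplifying assumptions, not the paper's method: per-class subspaces are estimated by SVD from the (possibly noisy) labeled features, and each sample is reassigned to the class whose subspace captures the most of its normalized feature direction. All names and the rank parameter `k` are hypothetical.

```python
import numpy as np

def class_subspaces(features, labels, num_classes, k=2):
    """Top-k right singular vectors of each class's centered features (d, k)."""
    subs = []
    for c in range(num_classes):
        Xc = features[labels == c]
        Xc = Xc - Xc.mean(axis=0)                   # center per class
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        subs.append(Vt[:k].T)
    return subs

def correct_labels(features, labels, num_classes, k=2):
    """Reassign each sample to the class subspace it aligns with best.

    Samples whose corrected label differs from the given one can be
    treated as detected label noise.
    """
    subs = class_subspaces(features, labels, num_classes, k)
    # directional alignment: projection energy of the unit feature vector
    X = features / np.linalg.norm(features, axis=1, keepdims=True)
    scores = np.stack([np.linalg.norm(X @ U, axis=1) for U in subs], axis=1)
    return scores.argmax(axis=1)
```

Note that nothing here estimates a noise transition matrix: the correction relies only on geometric consistency between a sample's feature direction and the class subspaces, which is the property the abstract highlights.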
