Self-Supervised Federated Learning under Data Heterogeneity for Label-Scarce Diatom Classification

arXiv cs.CV / 4/1/2026


Key Points

  • The paper studies self-supervised federated learning (SSFL) for diatom image classification under decentralized, heterogeneous, and label-scarce conditions where sites only partially overlap in their class sets.
  • It notes that prior SSFL research often assumes the same type of data heterogeneity across both pre-training and fine-tuning, whereas this work analyzes heterogeneity separately for each training stage.
  • The authors examine cross-site variation in unlabeled data volume during pre-training and label-space misalignment during fine-tuning, finding that unlabeled-volume heterogeneity is associated with improved representation learning, while under label-space heterogeneity performance is driven primarily by class prevalence rather than class-set size disparity.
  • To enable controlled simulation of real-world label-space heterogeneity, they introduce PreDi, a partitioning scheme that disentangles label-space differences into two orthogonal dimensions: class prevalence and class-set size disparity.
  • Based on these findings, they propose PreP-WFL (Prevalence-based Personalized Weighted Federated Learning) to boost rare-class representations, reporting consistent SSFL gains over local-only training and larger improvements as class prevalence decreases.

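The two PreDi dimensions can be made concrete with a small sketch. The paper's exact partitioning algorithm is not described here, so this is a hypothetical illustration: each client receives a subset of classes, with `prevalence` controlling how many classes a typical client holds (and hence how common each class is across clients) and `disparity` controlling how much class-set sizes vary between clients. The function name and both knobs are illustrative assumptions, not the authors' API.

```python
import random

def partition_label_space(num_classes, num_clients, prevalence, disparity, seed=0):
    """Hypothetical PreDi-style label-space partitioning (illustrative only).

    prevalence: target fraction of the class set held by an average client,
                which also sets how prevalent each class is across clients.
    disparity:  0..1 spread of class-set sizes around that average.
    """
    rng = random.Random(seed)
    base = max(1, round(prevalence * num_classes))  # mean classes per client
    client_classes = []
    for _ in range(num_clients):
        # disparity widens the allowed range of class-set sizes
        spread = round(disparity * base)
        k = min(num_classes, max(1, base + rng.randint(-spread, spread)))
        client_classes.append(sorted(rng.sample(range(num_classes), k)))
    return client_classes
```

With `disparity=0` every client holds the same number of classes, so only prevalence varies; raising `disparity` produces partially class-disjoint sites with unequal class-set sizes, the "pure" setting the paper says existing partitioning schemes fail to generate.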
Abstract

Label-scarce visual classification under decentralized and heterogeneous data is a fundamental challenge in pattern recognition, especially when sites exhibit partially overlapping class sets. While self-supervised federated learning (SSFL) offers a promising solution, existing studies commonly assume the same data heterogeneity pattern throughout pre-training and fine-tuning. Moreover, current partitioning schemes often fail to generate pure partially class-disjoint data settings, limiting controllable simulation of real-world label-space heterogeneity. In this work, we introduce SSFL for diatom classification as a representative real-world instance and systematically investigate stage-specific data heterogeneity. We study cross-site variation in unlabeled data volume during pre-training and label-space misalignment during downstream fine-tuning. To study the latter in a controllable setting, we propose PreDi, a partitioning scheme that disentangles label-space heterogeneity into two orthogonal dimensions, namely class Prevalence and class-set size Disparity, enabling separate analysis of their effects. Guided by the resulting insights, we further propose PreP-WFL (Prevalence-based Personalized Weighted Federated Learning) to adaptively strengthen rare-class representations in low-prevalence scenarios. Extensive experiments show that SSFL consistently outperforms local-only training under both homogeneous and heterogeneous settings. The pronounced heterogeneity in unlabeled data volume is associated with improved representation pre-training, whereas under label-space heterogeneity, prevalence dominates performance and disparity has a smaller effect. PreP-WFL effectively mitigates this degradation, with gains increasing as prevalence decreases. These findings provide a mechanistic basis for characterizing label-space heterogeneity in decentralized recognition systems.
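To make the prevalence-based weighting idea behind PreP-WFL more tangible, the following is a minimal sketch under assumed semantics: a class's prevalence is the fraction of clients whose label set contains it, and a client's aggregation weight grows with the rarity of the classes it holds, so sites with rare classes contribute more. The actual personalization rule in PreP-WFL is not reproduced here; the function and weighting formula are illustrative assumptions.

```python
def prevalence_weights(client_classes, num_clients):
    """Illustrative prevalence-based client weighting (not the paper's exact rule).

    Clients whose label sets contain rarer classes receive larger
    normalized aggregation weights.
    """
    # prevalence of class c = fraction of clients whose label set contains c
    counts = {}
    for classes in client_classes:
        for c in classes:
            counts[c] = counts.get(c, 0) + 1
    prevalence = {c: n / num_clients for c, n in counts.items()}
    # a client's raw weight = mean inverse prevalence over its classes
    raw = [sum(1.0 / prevalence[c] for c in cls) / len(cls)
           for cls in client_classes]
    total = sum(raw)
    return [w / total for w in raw]
```

For example, with three clients holding classes `[[0, 1], [0], [0]]`, class 1 appears at only one site, so that client's weight (0.5) exceeds the others' (0.25 each), matching the reported behavior that gains grow as prevalence decreases.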
