MLFCIL: A Multi-Level Forgetting Mitigation Framework for Federated Class-Incremental Learning in LEO Satellites

arXiv cs.LG / 4/6/2026


Key Points

  • The paper addresses federated class-incremental learning for LEO satellite on-board computing, where new classes arrive over time under tight memory and communication limits.
  • It identifies three LEO-specific issues (orbital-dynamics-driven non-IID data heterogeneity, catastrophic forgetting amplified during aggregation, and a constrained stability–plasticity trade-off) and argues that they arise at different levels of the FCIL pipeline.
  • It proposes MLFCIL, a multi-level forgetting mitigation framework that combines a class-reweighted loss, prototype-guided knowledge distillation with feature replay, and class-aware aggregation to preserve prior knowledge (a sketch of the reweighted loss follows this list).
  • It further introduces a dual-granularity coordination strategy combining round-level adaptive loss balancing with step-level gradient projection to improve the stability–plasticity balance.
  • Experiments on the NWPU-RESISC45 dataset show MLFCIL improves accuracy and forgetting mitigation compared with baselines while adding minimal resource overhead.
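The class-reweighted loss is the most self-contained of these components. Below is a minimal PyTorch sketch, assuming each satellite client can count its local samples per class; the `class_counts` tensor and the inverse-frequency weighting rule are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch of a class-reweighted cross-entropy, assuming each client
# tracks per-class sample counts locally. `class_counts` and the
# inverse-frequency rule are illustrative, not taken from the paper.
import torch
import torch.nn.functional as F

def reweighted_ce_loss(logits: torch.Tensor,
                       targets: torch.Tensor,
                       class_counts: torch.Tensor) -> torch.Tensor:
    """Cross-entropy with inverse-frequency class weights, so a non-IID
    client does not drift toward its locally over-represented classes."""
    weights = 1.0 / (class_counts.float() + 1.0)           # +1 guards empty classes
    weights = weights * len(class_counts) / weights.sum()  # normalize to mean 1
    return F.cross_entropy(logits, targets, weight=weights)

# Toy usage: 45 classes (as in NWPU-RESISC45) with a skewed local distribution.
if __name__ == "__main__":
    num_classes = 45
    counts = torch.randint(0, 200, (num_classes,))
    logits = torch.randn(8, num_classes)
    targets = torch.randint(0, num_classes, (8,))
    print(reweighted_ce_loss(logits, targets, counts))
```

Inverse frequency is one common reweighting choice; the paper's exact rule may differ.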

Abstract

Low-Earth-orbit (LEO) satellite constellations are increasingly performing on-board computing. However, the continuous emergence of new classes under strict memory and communication constraints poses major challenges for collaborative training. Federated class-incremental learning (FCIL) enables distributed incremental learning without sharing raw data, but it faces three LEO-specific challenges: non-independent and identically distributed (non-IID) data heterogeneity caused by orbital dynamics, catastrophic forgetting amplified during aggregation, and the need to balance stability and plasticity under limited resources. To tackle these challenges, we propose MLFCIL, a multi-level forgetting mitigation framework that decomposes catastrophic forgetting into three sources and addresses each at a different level: a class-reweighted loss to reduce local bias, knowledge distillation with feature replay and prototype-guided drift compensation to preserve cross-task knowledge, and class-aware aggregation to mitigate forgetting during federation. In addition, we design a dual-granularity coordination strategy that combines round-level adaptive loss balancing with step-level gradient projection to further improve the stability–plasticity trade-off. Experiments on the NWPU-RESISC45 dataset show that MLFCIL significantly outperforms baselines in both accuracy and forgetting mitigation, while introducing minimal resource overhead.
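The abstract names step-level gradient projection but does not spell out the projection rule. One plausible instantiation, shown in the hedged sketch below, is an A-GEM-style projection (Chaudhry et al., 2019) that drops the component of the new-task gradient conflicting with a reference gradient computed on replayed old-class features; this is a stand-in, not necessarily the authors' exact method.

```python
# Hedged sketch of step-level gradient projection in the A-GEM style
# (Chaudhry et al., 2019). The abstract names the technique but not the
# rule, so this is one plausible instantiation, not the authors' exact
# method. `g_ref` is assumed to come from replayed old-class features.
import torch

def project_gradient(g_new: torch.Tensor, g_ref: torch.Tensor) -> torch.Tensor:
    """Remove the component of the flattened new-task gradient `g_new`
    that conflicts with the flattened reference gradient `g_ref`, so an
    update step cannot increase the (approximate) old-task loss."""
    dot = torch.dot(g_new, g_ref)
    if dot < 0.0:  # gradients conflict: the step would hurt old classes
        g_new = g_new - (dot / torch.dot(g_ref, g_ref)) * g_ref
    return g_new
```

Applying this check at every optimizer step, while the round-level loss balancing adjusts objective weights once per federated round, is what makes the coordination dual-granularity.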