Lifecycle-Aware Federated Continual Learning in Mobile Autonomous Systems

arXiv cs.LG / April 23, 2026


Key Points

  • The paper introduces a lifecycle-aware dual-timescale federated continual learning (FCL) framework for distributed autonomous fleets that must adapt over long missions.
  • It addresses key limitations of prior work by using layer-selective protection to handle different forgetting sensitivities across network layers and by explicitly managing both short-term forgetting and long-term cumulative drift.
  • The proposed method combines a layer-selective rehearsal strategy for training-time (pre-forgetting) stability with a rapid post-forgetting knowledge recovery strategy that restores performance after long-term degradation; both ideas are sketched in code after this list.
  • The authors provide a theoretical analysis of heterogeneous forgetting dynamics, arguing that long-term degradation is inevitable and that dedicated recovery mechanisms are therefore necessary.
  • Experiments report up to an 8.3% mIoU improvement over the strongest federated baseline and up to 31.7% over conventional fine-tuning, with additional validation via deployment on a real rover testbed under realistic constraints.
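
To make the layer-selective idea concrete, here is a minimal Python/PyTorch sketch. It assumes a Fisher-style squared-gradient proxy for per-layer forgetting sensitivity and a weight-anchoring penalty on only the most sensitive layers; the names (`layer_sensitivity`, `train_step`) and hyperparameters (`top_k`, `lam`) are illustrative assumptions, not the authors' specification.

```python
# A minimal sketch (not the paper's exact algorithm) of layer-selective
# rehearsal: estimate per-layer forgetting sensitivity from squared
# gradients on a small rehearsal buffer, then anchor only the most
# sensitive layers to a pre-adaptation snapshot during local training.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def layer_sensitivity(model, rehearsal_batch, loss_fn):
    # Mean squared gradient per parameter tensor on old-task data:
    # a Fisher-style proxy for how strongly each layer encodes old knowledge.
    model.zero_grad()
    x, y = rehearsal_batch
    loss_fn(model(x), y).backward()
    return {n: p.grad.pow(2).mean().item()
            for n, p in model.named_parameters() if p.grad is not None}

def train_step(model, anchor, new_batch, rehearsal_batch, sens,
               optimizer, loss_fn, top_k=2, lam=10.0):
    # Protect only the top_k most forgetting-sensitive parameter tensors,
    # instead of the uniform protection the paper criticizes.
    protected = set(sorted(sens, key=sens.get, reverse=True)[:top_k])
    anchors = {n: p.detach() for n, p in anchor.named_parameters()}
    optimizer.zero_grad()
    x, y = new_batch
    loss = loss_fn(model(x), y)
    xr, yr = rehearsal_batch                 # interleave rehearsal data
    loss = loss + loss_fn(model(xr), yr)
    for n, p in model.named_parameters():
        if n in protected:                   # layer-selective anchoring
            loss = loss + lam * F.mse_loss(p, anchors[n])
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random tensors standing in for terrain batches.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 3))
anchor = copy.deepcopy(model)                # snapshot before local adaptation
loss_fn = nn.CrossEntropyLoss()
old = (torch.randn(4, 8), torch.randint(0, 3, (4,)))
new = (torch.randn(4, 8), torch.randint(0, 3, (4,)))
sens = layer_sensitivity(model, old, loss_fn)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
train_step(model, anchor, new, old, sens, opt, loss_fn)
```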

Abstract

Federated continual learning (FCL) allows distributed autonomous fleets to adapt collaboratively to evolving terrain types across extended mission lifecycles. However, current approaches face several key challenges: 1) they use uniform protection strategies that do not account for the varying sensitivity to forgetting across different network layers; 2) they focus primarily on preventing forgetting during training, without addressing the long-term effects of cumulative drift; and 3) they often depend on idealized simulations that fail to capture the real-world heterogeneity present in distributed fleets. In this paper, we propose a lifecycle-aware dual-timescale FCL framework that incorporates both training-time (pre-forgetting) prevention and subsequent (post-forgetting) recovery. Under this framework, we design a layer-selective rehearsal strategy that mitigates immediate forgetting during local training, and a rapid knowledge recovery strategy that restores degraded models after long-term cumulative drift. We present a theoretical analysis that characterizes heterogeneous forgetting dynamics and establishes the inevitability of long-term degradation. Our experimental results show that this framework achieves up to an 8.3% mIoU improvement over the strongest federated baseline and up to 31.7% over conventional fine-tuning. We also deploy the FCL framework on a real-world rover testbed to assess system-level robustness under realistic constraints; the test results further confirm the effectiveness of our FCL design.
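
The dual-timescale structure can likewise be sketched as a simple control loop: the fast timescale runs prevention every training round, while the slow timescale watches a held-out probe metric for cumulative drift and triggers recovery when it exceeds a tolerance. `LifecycleLoop`, `drift_tol`, and `recover()` below are hypothetical stand-ins under those assumptions, since the paper's actual recovery procedure is not detailed here.

```python
# A hedged sketch of the dual-timescale lifecycle loop. The fast timescale
# (train_round) would apply pre-forgetting prevention such as the
# layer-selective rehearsal sketched above; the slow timescale monitors a
# probe metric (e.g., mIoU on held-out data) and triggers post-forgetting
# recovery once cumulative drift exceeds a tolerance. All names and the
# threshold value are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class LifecycleLoop:
    train_round: Callable[[], None]    # fast timescale: local training step
    probe_metric: Callable[[], float]  # e.g., mIoU on a held-out probe set
    recover: Callable[[], None]        # slow timescale: e.g., reload trusted
                                       # weights + brief rehearsal fine-tune
    drift_tol: float = 0.05            # tolerated drop from the best metric
    history: List[float] = field(default_factory=list)

    def run(self, rounds: int) -> None:
        best = float("-inf")
        for _ in range(rounds):
            self.train_round()                # pre-forgetting prevention
            score = self.probe_metric()
            self.history.append(score)
            best = max(best, score)
            if best - score > self.drift_tol: # cumulative drift detected
                self.recover()                # post-forgetting recovery
                best = self.probe_metric()    # reset reference after repair
```

The point the loop encodes matches the paper's argument: since prevention alone cannot stop long-term cumulative drift, a monitored recovery path is built into the lifecycle from the start rather than bolted on after failure.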