FedACT: Concurrent Federated Intelligence across Heterogeneous Data Sources

arXiv cs.AI / 5/4/2026


Key Points

  • The paper introduces FedACT, a scheduling method for federated learning systems that run multiple ML tasks concurrently on the same pool of heterogeneous devices.
  • FedACT uses an alignment scoring mechanism to match each device’s available resources with each job’s resource demands, aiming to improve overall training efficiency.
  • The approach explicitly incorporates participation fairness so devices contribute more evenly across concurrent FL jobs, boosting the quality of the resulting global models.
  • Experiments on diverse FL jobs and benchmark datasets show FedACT can cut average job completion time (JCT) by up to 8.3× and raise model accuracy by up to 44.5% versus state-of-the-art baselines.

Abstract

Federated Learning (FL) enables collaborative intelligence across decentralized data source devices in a privacy-preserving way. While substantial research attention has been drawn to optimizing the learning process for an individual task, real-world applications increasingly require multiple machine learning tasks simultaneously training their models across a shared pool of devices. Naively applying single-FL optimization techniques in multi-FL systems results in suboptimal system performance, particularly due to device heterogeneity and resource inefficiency. To address such a critical open challenge, we introduce FedACT, a novel resource heterogeneity-aware device scheduling approach designed to efficiently schedule heterogeneous devices across multiple concurrent FL jobs, with the goal of minimizing their average job completion time (JCT). FedACT dynamically assigns devices to FL jobs based on an alignment scoring mechanism that evaluates the compatibility between available resources of devices and resource demands of jobs. Additionally, it incorporates participation fairness to ensure balanced contributions from devices across jobs, further enhancing the accuracy levels of learned global models. An optimal scheduling plan is formulated in FedACT by prioritizing devices with higher alignment scores, while ensuring fair participation across jobs. To evaluate the effectiveness of the proposed scheduling algorithm, we carried out comprehensive experiments using diverse FL jobs and benchmark datasets. Experimental results demonstrate that FedACT reduces the average JCT by up to 8.3× and improves model accuracy by up to 44.5%, compared to the state-of-the-art baselines.
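To make the scheduling idea concrete, here is a minimal Python sketch of an alignment-score-plus-fairness device assignment loop. The paper does not specify its scoring formula here, so the score below (demand-to-capacity ratio, zero if the device cannot cover the demand) and the linear fairness penalty are illustrative assumptions, not FedACT's actual algorithm; the `Device`, `Job`, and `schedule_round` names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Device:
    name: str
    resources: dict                         # e.g. {"cpu": 4, "mem": 8}
    # job name -> number of rounds this device has already joined
    participation: dict = field(default_factory=dict)

@dataclass
class Job:
    name: str
    demands: dict                           # same keys as Device.resources
    quota: int                              # devices needed this round

def alignment_score(device, job):
    """Illustrative score: 0 if the device cannot cover the job's demands;
    otherwise higher when capacity closely matches demand (little slack)."""
    score = 0.0
    for key, need in job.demands.items():
        have = device.resources.get(key, 0.0)
        if have < need:
            return 0.0                      # device cannot run this job
        score += need / have                # tight fit scores near 1.0
    return score / len(job.demands)

def schedule_round(devices, jobs, fairness_weight=0.1):
    """Greedily give each job its quota of unassigned devices, ranked by
    alignment score minus a penalty on devices that participated often."""
    assignment = {job.name: [] for job in jobs}
    free = {d.name for d in devices}
    for job in jobs:
        ranked = sorted(
            (d for d in devices if d.name in free),
            key=lambda d: alignment_score(d, job)
                          - fairness_weight * d.participation.get(job.name, 0),
            reverse=True,
        )
        for d in ranked[:job.quota]:
            if alignment_score(d, job) > 0:
                assignment[job.name].append(d.name)
                free.discard(d.name)
                d.participation[job.name] = d.participation.get(job.name, 0) + 1
    return assignment
```

Under this toy scoring, a device whose capacity exactly matches a job's demand outranks an over-provisioned one, which keeps powerful devices free for heavier jobs, and the fairness penalty rotates participation across rounds.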