Scalable Cross-Facility Federated Learning for Scientific Foundation Models on Multiple Supercomputers

arXiv cs.LG / 3/23/2026


Key Points

  • The authors present a cross-facility federated learning framework for heterogeneous HPC environments, built on APPFL with Globus Compute and Transfer orchestration to enable training across multiple DOE leadership-class supercomputers (see the sketch after this list).
  • They characterize sources of heterogeneity that affect training performance under realistic HPC scheduling and show that algorithmic choices significantly influence outcomes.
  • They validate the approach by fine-tuning a large language model on a chemistry instruction dataset, demonstrating practical scientific applicability.
  • They identify scheduler-aware algorithm design as a critical open challenge for future cross-facility deployments.
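
The framework is only summarized here, but the orchestration pattern it relies on can be illustrated with the public Globus Compute SDK. The sketch below is a hypothetical, minimal FedAvg-style round dispatched to two facilities; the endpoint IDs, the `local_train` function, and the averaging step are illustrative stand-ins, not the paper's APPFL code.

```python
from globus_compute_sdk import Executor

# Hypothetical Globus Compute endpoint IDs, one per participating facility.
ENDPOINTS = {
    "facility_a": "00000000-0000-0000-0000-000000000001",
    "facility_b": "00000000-0000-0000-0000-000000000002",
}

def local_train(global_weights, epochs=1):
    """Runs on the remote HPC endpoint: fine-tune locally, return updated weights.
    Illustrative placeholder; the real framework delegates this to APPFL clients."""
    # ... load the facility's local data shard, train starting from global_weights ...
    return global_weights  # stand-in for the locally updated weights

def federated_round(global_weights):
    """Submit one training round to every facility and average the returned updates."""
    executors = {name: Executor(endpoint_id=eid) for name, eid in ENDPOINTS.items()}
    try:
        futures = [gce.submit(local_train, global_weights) for gce in executors.values()]
        updates = [f.result() for f in futures]  # blocks until the remote HPC jobs finish
    finally:
        for gce in executors.values():
            gce.shutdown()
    # Simple unweighted FedAvg over the per-facility updates.
    return [sum(layer) / len(updates) for layer in zip(*updates)]
```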

Abstract

Artificial Intelligence for scientific applications increasingly requires training large models on data that cannot be centralized due to privacy constraints, data sovereignty, or the sheer volume of data generated. Federated learning (FL) addresses this by enabling collaborative training without centralizing raw data, but scientific applications demand model scales that require extensive computing resources, typically offered at High Performance Computing (HPC) facilities. Deploying FL experiments across HPC facilities introduces challenges beyond those of cloud or enterprise settings. We present a comprehensive cross-facility FL framework for heterogeneous HPC environments, built on the Advanced Privacy-Preserving Federated Learning (APPFL) framework with Globus Compute and Transfer orchestration, and evaluate it across four U.S. Department of Energy (DOE) leadership-class supercomputers. We demonstrate that FL experiments across HPC facilities are practically achievable, characterize key sources of heterogeneity that impact training performance, and show that algorithmic choices matter significantly under realistic HPC scheduling conditions. We validate scientific applicability by fine-tuning a large language model on a chemistry instruction dataset, and identify scheduler-aware algorithm design as a critical open challenge for future deployments.
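
The abstract's point that algorithmic choices matter under realistic HPC scheduling can be made concrete with a generic example: when facility queue delays cause client updates to arrive late, a FedAsync-style staleness-weighted aggregation rule down-weights updates computed against an older global model. This is a well-known technique sketched here for illustration, not necessarily one of the algorithms evaluated in the paper.

```python
def staleness_weight(current_round: int, update_round: int, alpha: float = 0.6) -> float:
    """Down-weight an update by how many global rounds old its base model is."""
    staleness = current_round - update_round
    return alpha / (1.0 + staleness)

def apply_async_update(global_w, client_w, current_round, update_round):
    """Mix one late-arriving client update into the global model weights."""
    w = staleness_weight(current_round, update_round)
    return [(1.0 - w) * g + w * c for g, c in zip(global_w, client_w)]
```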