SL-FAC: A Communication-Efficient Split Learning Framework with Frequency-Aware Compression

arXiv cs.LG / 4/9/2026


Key Points

  • The paper introduces SL-FAC, a split learning framework designed to reduce communication overhead when training large neural networks across resource-constrained edge devices and an edge server.
  • It improves on existing split learning by transforming the smashed activations/gradients into the frequency domain and applying adaptive frequency decomposition (AFD) to separate spectral components by information content.
  • It then applies frequency-based quantization compression (FQC), assigning a customized quantization bit width to each spectral component according to its energy distribution so that convergence-critical information is preserved.
  • The authors report extensive experimental results showing that SL-FAC achieves substantial communication reduction while maintaining or improving training efficiency compared with prior approaches.

Abstract

The growing complexity of neural networks hinders the deployment of distributed machine learning on resource-constrained devices. Split learning (SL) offers a promising solution by partitioning the large model and offloading the primary training workload from edge devices to an edge server. However, the increasing number of participating devices and growing model complexity lead to significant communication overhead from the transmission of smashed data (e.g., activations and gradients), which constitutes a critical bottleneck for SL. To tackle this challenge, we propose SL-FAC, a communication-efficient SL framework comprising two key components: adaptive frequency decomposition (AFD) and frequency-based quantization compression (FQC). AFD first transforms the smashed data into the frequency domain and decomposes it into spectral components with distinct information. FQC then applies customized quantization bit widths to each component based on its spectral energy distribution. Together, these components enable SL-FAC to achieve significant communication reduction while strategically preserving the information most crucial for model convergence. Extensive experiments confirm the superior performance of SL-FAC in improving training efficiency.
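To make the AFD + FQC pipeline concrete, here is a minimal sketch of the idea as described in the abstract: transform smashed data into the frequency domain, split the spectrum into components, and quantize each component with a bit width proportional to its energy. The transform choice (a 1-D FFT), the band splitting, the bit-allocation rule, and all function names are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def afd_fqc_compress(smashed, num_bands=4, total_bits=16):
    """Hypothetical sketch of frequency-aware compression of smashed data.

    AFD step (assumed): FFT along the feature axis, split into contiguous bands.
    FQC step (assumed): allocate quantization bits to each band in proportion
    to its spectral energy, so high-energy (information-rich) bands keep more
    precision. The real SL-FAC design may differ in every detail.
    """
    # AFD: move smashed activations/gradients into the frequency domain
    spectrum = np.fft.rfft(smashed, axis=-1)
    # decompose the spectrum into spectral components (contiguous bands here)
    bands = np.array_split(spectrum, num_bands, axis=-1)

    # FQC: measure per-band energy and turn it into a bit-width budget
    energies = np.array([np.sum(np.abs(b) ** 2) for b in bands])
    weights = energies / energies.sum()
    bits = np.maximum(1, np.round(weights * total_bits)).astype(int)

    compressed = []
    for band, b in zip(bands, bits):
        levels = 2 ** int(b) - 1
        # uniform quantization of real and imaginary parts with b bits each
        for part in (band.real, band.imag):
            lo, hi = part.min(), part.max()
            scale = (hi - lo) / levels if hi > lo else 1.0
            q = np.round((part - lo) / scale).astype(np.uint16)
            compressed.append((q, lo, scale))
    return compressed, bits
```

In this sketch, a low-energy band might receive only 1-2 bits while a dominant band receives most of the budget, which captures the abstract's claim that bit widths are customized per component to preserve convergence-critical information while shrinking the payload sent between device and server.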