FedOBP: Federated Optimal Brain Personalization through Cloud-Edge Element-wise Decoupling

arXiv cs.LG / 4/21/2026

Key Points

  • The paper addresses the limitations of federated learning under heterogeneous client data and on resource-constrained mobile devices by focusing on Personalized Federated Learning (PFL) via model decoupling.
  • It introduces FedOBP, which uses a quantile-based thresholding mechanism and an element-wise importance score to decide which parameters should be personalized versus shared globally.
  • The method extends Optimal Brain Damage (OBD) pruning theory by using a federated approximation of the first-order derivative in a Taylor expansion to estimate each parameter’s sensitivity to local loss landscapes.
  • FedOBP shifts the metric/importance computation from clients to the server to reduce computation burden on mobile devices.
  • Experiments on multiple datasets and heterogeneity settings show FedOBP achieves better performance than existing approaches while requiring personalization of only a small fraction of parameters.
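The OBD connection in the points above can be made concrete. In classical OBD, the change in loss from perturbing parameters is approximated with a Taylor expansion (diagonal Hessian assumption):

```latex
\delta \mathcal{L} \;\approx\; \sum_i g_i\,\delta\theta_i \;+\; \tfrac{1}{2}\sum_i H_{ii}\,\delta\theta_i^2,
\qquad
g_i = \frac{\partial \mathcal{L}}{\partial \theta_i},\quad
H_{ii} = \frac{\partial^2 \mathcal{L}}{\partial \theta_i^2}.
```

Classical OBD assumes training has converged, drops the first-order term ($g_i \approx 0$), and ranks parameters by the saliency $s_i = \tfrac{1}{2} H_{ii}\,\theta_i^2$. Per this summary, FedOBP instead retains the first-order term, with $g_i$ estimated through a federated approximation, so that a parameter's importance reflects its sensitivity to the local loss landscape; the exact estimator used in the paper is not given here.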

Abstract

Federated Learning (FL) faces challenges from client data heterogeneity and resource-constrained mobile devices, which can degrade model accuracy. Personalized Federated Learning (PFL) addresses this issue by adapting shared global knowledge to local data distributions. A promising approach in PFL is model decoupling, which separates the model into global and personalized parameters, raising the key question of which parameters should be personalized to balance global knowledge sharing and local adaptation. In this paper, we propose a Federated Optimal Brain Personalization (FedOBP) algorithm with a quantile-based thresholding mechanism and introduce an element-wise importance score. This score extends Optimal Brain Damage (OBD) pruning theory by incorporating a federated approximation of the first-order derivative in the Taylor expansion to evaluate the importance of each parameter for personalization. Moreover, we move the metric computation originally performed on clients to the server side, to alleviate the burden on resource-constrained mobile devices. To the best of our knowledge, this is the first work to bridge classical saliency-based pruning theory with federated parameter decoupling, providing a rigorous theoretical justification for selecting personalized parameters based on their sensitivity to local loss landscapes. Extensive experiments demonstrate that FedOBP outperforms state-of-the-art methods across diverse datasets and heterogeneity scenarios, while requiring personalization of only a very small fraction of parameters.
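The quantile-based selection described above can be sketched in a few lines. This is an illustrative toy, not the paper's exact algorithm: the gradient is approximated server-side from the aggregated model update (a common pseudo-gradient trick), the importance score is a first-order Taylor saliency |g·θ|, and parameters above the q-quantile are flagged for personalization. The function name and the specific saliency form are assumptions for illustration.

```python
import numpy as np

def personalization_mask(global_prev, global_curr, lr=1.0, q=0.95):
    """Hypothetical sketch of FedOBP-style parameter selection.

    global_prev/global_curr: flattened global model before/after an
    aggregation round. The server approximates the gradient from the
    round's update, scores each element by a first-order Taylor
    saliency, and personalizes only those above the q-quantile.
    """
    pseudo_grad = (global_prev - global_curr) / lr   # server-side gradient proxy
    importance = np.abs(pseudo_grad * global_curr)   # first-order saliency |g * theta|
    threshold = np.quantile(importance, q)           # quantile-based threshold
    return importance > threshold                    # True = keep as personalized

# Toy usage: 1000 parameters, top 5% flagged for personalization.
rng = np.random.default_rng(0)
prev = rng.normal(size=1000)
curr = prev - 0.01 * rng.normal(size=1000)
mask = personalization_mask(prev, curr, q=0.95)
print(int(mask.sum()))  # → 50
```

Computing the mask from quantities the server already holds (successive global models) is what lets the importance metric move off the clients, matching the resource-saving motivation in the abstract.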