Incentive-Aware Federated Averaging with Performance Guarantees under Strategic Participation

arXiv cs.LG / 2026-03-24


Key Points

  • The paper studies federated learning where participating clients act strategically to trade off learning benefits against the cost of sharing their data.
  • It proposes an incentive-aware variant of Federated Averaging in which, at each round, clients send both their updated model parameters and their dataset sizes, the latter adjusted via a Nash-equilibrium-seeking update rule.
  • The authors provide theoretical performance guarantees for the resulting algorithm under both convex and nonconvex global objective functions.
  • Experiments on MNIST and CIFAR-10 show that the method can reach stable strategic participation patterns while maintaining competitive global model performance.

Abstract

Federated learning (FL) is a communication-efficient collaborative learning framework that enables model training across multiple agents with private local datasets. While the benefits of FL in improving global model performance are well established, individual agents may behave strategically, balancing the learning payoff against the cost of contributing their local data. Motivated by the need for FL frameworks that successfully retain participating agents, we propose an incentive-aware federated averaging method in which, at each communication round, clients transmit both their local model parameters and their updated training dataset sizes to the server. The dataset sizes are dynamically adjusted via a Nash equilibrium (NE)-seeking update rule that captures strategic data participation. We analyze the proposed method under convex and nonconvex global objective settings and establish performance guarantees for the resulting incentive-aware FL algorithm. Numerical experiments on the MNIST and CIFAR-10 datasets demonstrate that agents achieve competitive global model performance while converging to stable data participation strategies.
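The mechanism described above can be sketched in a toy simulation. This is an illustrative stand-in, not the paper's algorithm: the local objectives are simple quadratics, and the clients' utility function (a concave benefit from the pooled data minus a per-sample cost, `u_i(n) = log(1 + n_i + 0.5 * sum_{j != i} n_j) - c_i * n_i`) is an assumption chosen so that a unique interior Nash equilibrium exists. What it does show faithfully is the structure of the method: each round, clients run a local training step, update their reported dataset sizes by projected gradient ascent on their utility (a simple NE-seeking rule), and the server aggregates models weighted by the reported sizes.

```python
import numpy as np

def incentive_aware_fedavg(w_stars, costs, rounds=800, eta=0.5,
                           size_step=1.0, n_max=20.0):
    """Toy incentive-aware FedAvg sketch (illustrative assumptions, see above).

    w_stars : list of per-client local optima (local loss 0.5*||w - w_i*||^2)
    costs   : per-client cost c_i per unit of contributed data
    """
    m = len(costs)
    w = np.zeros_like(w_stars[0])         # global model
    n = np.zeros(m)                       # reported dataset sizes
    for _ in range(rounds):
        # Local training: one gradient step on f_i(w) = 0.5*||w - w_i*||^2.
        local_models = [(1 - eta) * w + eta * ws for ws in w_stars]
        # NE-seeking update: simultaneous projected gradient ascent on the
        # assumed utility u_i; d u_i / d n_i = 1/(1 + n_i + 0.5*S_{-i}) - c_i.
        total = n.sum()
        grad = 1.0 / (1.0 + n + 0.5 * (total - n)) - costs
        n = np.clip(n + size_step * grad, 0.0, n_max)
        # Server aggregation weighted by the reported dataset sizes.
        p = n / n.sum() if n.sum() > 0 else np.full(m, 1.0 / m)
        w = sum(pi * wi for pi, wi in zip(p, local_models))
    return w, n

# Two clients with different data-sharing costs; under the assumed utility,
# the sizes converge to the interior Nash equilibrium n* = (10/3, 4/3),
# and the global model settles at the n*-weighted average of the local optima.
w, n = incentive_aware_fedavg(
    [np.array([1.0, 0.0]), np.array([0.0, 1.0])],
    np.array([0.2, 0.25]),
)
```

Note the interplay this captures: the equilibrium sizes `n*` depend only on the clients' costs and the shape of the benefit, while the global model's fixed point is the `n*`-weighted average of the local solutions, so the strategic participation pattern directly determines which clients' data dominates the final model.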