Incentive-Aware Federated Averaging with Performance Guarantees under Strategic Participation
arXiv cs.LG / 2026/3/24
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key points
- The paper studies federated learning where participating clients act strategically to trade off learning benefits against the cost of sharing their data.
- It proposes an incentive-aware variant of Federated Averaging in which, each round, participating clients send both their updated model parameters and their strategically chosen dataset sizes, the latter updated via a Nash-equilibrium-seeking rule (see the sketch after this list).
- The authors provide theoretical performance guarantees for the resulting algorithm under both convex and nonconvex global objective functions.
- Experiments on MNIST and CIFAR-10 show that the method can reach stable strategic participation patterns while maintaining competitive global model performance.
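To make the participation dynamic concrete, below is a minimal Python sketch of the loop the summary describes: clients repeatedly best-respond with their shared dataset sizes (best-response dynamics being one common Nash-equilibrium-seeking rule), and the server runs FedAvg weighted by the reported sizes. The utility function, discount `BETA`, per-sample costs, and synthetic linear-regression task are illustrative assumptions, not the paper's actual model or guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)

# ---- Hypothetical strategic-participation model (illustrative, not the paper's) ----
# Client i shares n_i samples per round. Its assumed payoff is
#   u_i(n_i, n_-i) = log(1 + n_i + BETA * N_-i) - c_i * n_i,
# a learning benefit that grows with its own shared data plus a discounted
# benefit from the data others share, minus a private per-sample cost c_i.
BETA = 0.2

def best_response(n_others_total, cost, n_max):
    """Maximize u_i over n_i in [0, n_max]; from the first-order
    condition 1 / (1 + n_i + BETA * N_-i) = cost."""
    n_star = 1.0 / cost - 1.0 - BETA * n_others_total
    return float(np.clip(n_star, 0.0, n_max))

def local_sgd(w, X, y, lr=0.1, steps=5):
    """A few gradient steps on the local least-squares loss."""
    for _ in range(steps):
        w = w - lr * (X.T @ (X @ w - y)) / len(y)
    return w

# Synthetic clients, each holding a pool of data from a shared linear model.
d, num_clients, n_pool = 5, 4, 200
w_true = rng.normal(size=d)
pools = []
for _ in range(num_clients):
    X = rng.normal(size=(n_pool, d))
    pools.append((X, X @ w_true + 0.1 * rng.normal(size=n_pool)))

costs = rng.uniform(0.008, 0.02, size=num_clients)  # assumed private sharing costs
n = np.full(num_clients, 50.0)  # each client's strategy: shared dataset size
w_global = np.zeros(d)

for rnd in range(20):
    updates, sizes = [], []
    for i in range(num_clients):
        # Nash-equilibrium-seeking step: best-respond to the others' current sizes.
        n[i] = best_response(n.sum() - n[i], costs[i], n_max=n_pool)
        k = max(1, int(n[i]))  # train on the chosen number of samples
        X, y = pools[i]
        updates.append(local_sgd(w_global.copy(), X[:k], y[:k]))
        sizes.append(k)
    # Server: FedAvg aggregation weighted by the reported dataset sizes.
    weights = np.asarray(sizes) / sum(sizes)
    w_global = sum(wt * u for wt, u in zip(weights, updates))

print("dataset sizes at (approximate) equilibrium:", np.round(n, 1))
print("global model error:", np.linalg.norm(w_global - w_true))
```

Note the structural point the sketch illustrates: because FedAvg weights updates by dataset size, the same quantity that clients choose strategically also controls their influence on the global model, which is what couples the game dynamics to the learning dynamics.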

