AI Navigate

When Differential Privacy Meets Wireless Federated Learning: An Improved Analysis for Privacy and Convergence

arXiv cs.LG / 3/20/2026


Key Points

  • The paper provides a comprehensive analysis of privacy loss and convergence for differentially private wireless federated learning (DPWFL) with general smooth non-convex objectives.
  • It explicitly incorporates device selection and mini-batch sampling, showing that privacy loss can converge to a constant rather than diverge with the number of iterations.
  • The work establishes convergence guarantees with gradient clipping and derives an explicit privacy-utility trade-off.
  • Numerical results validate the theoretical findings and demonstrate practical implications for DPWFL deployments.

Abstract

Differentially private wireless federated learning (DPWFL) is a promising framework for protecting sensitive user data. However, foundational questions on how to precisely characterize privacy loss remain open, and existing work is further limited by convergence analyses that rely on restrictive convexity assumptions or ignore the effect of gradient clipping. To overcome these issues, we present a comprehensive analysis of privacy and convergence for DPWFL with general smooth non-convex loss objectives. Our analysis explicitly incorporates both device selection and mini-batch sampling, and shows that the privacy loss can converge to a constant rather than diverge with the number of iterations. Moreover, we establish convergence guarantees with gradient clipping and derive an explicit privacy-utility trade-off. Numerical results validate our theoretical findings.
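To make the mechanism concrete, the following is a minimal sketch of the kind of local update the abstract describes: per-sample gradients are clipped to a norm bound, averaged, and perturbed with Gaussian noise before transmission. The function name, parameters, and NumPy-based implementation are illustrative assumptions, not the paper's actual algorithm; the paper's contribution is the privacy and convergence analysis of such clipped, noised updates under device selection and mini-batch sampling.

```python
import numpy as np

def dp_local_update(per_sample_grads, clip_norm, noise_std, rng):
    """Generic DP-SGD-style local step (illustrative, not the paper's
    exact mechanism): clip each per-sample gradient to `clip_norm`,
    average, then add isotropic Gaussian noise with std `noise_std`."""
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        # Scale down only if the gradient exceeds the clipping bound.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_std, size=avg.shape)
    return avg + noise
```

With `noise_std = 0` the update reduces to plain clipped averaging, which is why the clipping bound (not the raw gradient norm) controls the sensitivity that the privacy analysis depends on.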