Abstract
Differential privacy (DP) is obtained by randomizing a data analysis algorithm, which necessarily introduces a tradeoff between utility and privacy. Many DP mechanisms are built upon one of two underlying tools: the Laplace and Gaussian additive noise mechanisms. We expand the search space of algorithms by investigating the Generalized Gaussian (GG) mechanism, which samples the additive noise term x with probability proportional to e^{-\left(\frac{|x|}{\sigma}\right)^{\beta}} for some \beta \geq 1 (denoted GG_{\beta, \sigma}(f,D)). The Laplace and Gaussian mechanisms are special cases of GG for \beta=1 and \beta=2, respectively.
We prove that the full GG family satisfies differential privacy and extend the PRV accountant to support privacy loss computation for these mechanisms. We then instantiate the GG mechanism in two canonical private learning pipelines, PATE and DP-SGD. Empirically, we explore PATE and DP-SGD with the GG mechanism across the computationally feasible values of \beta: \beta \in [1,2] for DP-SGD and \beta \in [1,4] for PATE. For both pipelines, we find that \beta=2 (Gaussian) performs as well as or better than other values in the computationally tractable domain. This provides justification for the widespread adoption of the Gaussian mechanism in DP learning.
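As a minimal illustration of the noise distribution defined above (a sketch, not code from the paper), Generalized Gaussian noise with density proportional to e^{-(|x|/\sigma)^\beta} can be sampled via the standard gamma-based construction: if g ~ Gamma(1/\beta, 1), then \sigma g^{1/\beta} with a random sign has the desired density. The function name `gg_noise` is our own; NumPy is assumed.

```python
import numpy as np

def gg_noise(beta: float, sigma: float, size: int, rng: np.random.Generator) -> np.ndarray:
    """Draw samples with density proportional to exp(-(|x|/sigma)**beta).

    Construction: if g ~ Gamma(1/beta, 1), then (|x|/sigma)**beta ~ g,
    so x = sign * sigma * g**(1/beta) has the GG_{beta, sigma} density.
    """
    g = rng.gamma(shape=1.0 / beta, scale=1.0, size=size)
    signs = rng.choice([-1.0, 1.0], size=size)  # symmetric about 0
    return signs * sigma * g ** (1.0 / beta)

rng = np.random.default_rng(0)
# beta=1 recovers Laplace noise (variance 2*sigma**2 for scale sigma);
# beta=2 recovers Gaussian noise with density exp(-(x/sigma)**2), i.e. variance sigma**2/2.
laplace = gg_noise(beta=1.0, sigma=1.0, size=100_000, rng=rng)
gauss = gg_noise(beta=2.0, sigma=1.0, size=100_000, rng=rng)
```

The same sampler covers the whole family \beta \geq 1, which is what allows a single pipeline (e.g., DP-SGD or PATE) to sweep over \beta.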