Abstract:
Federated learning (FL) is a training paradigm in which clients collaboratively learn models by repeatedly sharing information, without overly compromising the privacy of their local sensitive data. In this paper, we introduce <i>federated f-differential privacy</i>, a new notion specifically tailored to the federated setting, based on the framework of Gaussian differential privacy. Federated f-differential privacy operates at the <i>record level</i>: it provides a privacy guarantee for each individual record of one client's data against adversaries. We then propose a generic private federated learning framework that accommodates a large family of state-of-the-art FL algorithms and provably achieves federated f-differential privacy. Finally, we empirically demonstrate the trade-off between privacy guarantees and prediction performance for models trained with this framework on computer vision tasks.
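To make the record-level mechanism concrete, the following is a minimal sketch (not the paper's exact algorithm) of a DP-SGD-style noisy local update: each record's gradient is clipped to bound its individual influence, and Gaussian noise is added before the update leaves the client, which is the standard route to Gaussian-DP-type guarantees. The function name, clipping norm, and noise multiplier are illustrative assumptions.

```python
import numpy as np

def noisy_local_update(per_example_grads, clip_norm=1.0,
                       noise_multiplier=1.1, rng=None):
    """Sketch of a record-level private client update (assumed, DP-SGD style).

    Clips each record's gradient to `clip_norm`, averages, and adds Gaussian
    noise scaled by `noise_multiplier * clip_norm / n` before release.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    n = len(per_example_grads)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose norm exceeds the clipping threshold.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Gaussian noise calibrated to the per-record sensitivity clip_norm / n.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / n, size=avg.shape)
    return avg + noise
```

Only the noised average is shared with the server; raw per-record gradients never leave the client.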