12/07/2020

Instance-hiding Schemes for Private Distributed Learning

Yangsibo Huang, Zhao Song, Sanjeev Arora, Kai Li

Keywords: Privacy-preserving Statistics and Machine Learning

Abstract: An important problem today is how to allow a group of decentralized entities to train a centralized deep net on their private data while protecting data privacy. Classic cryptographic techniques are too inefficient, so other methods have recently been suggested, e.g., differentially private Federated Learning. Here, a new method is introduced, inspired by the classic notion of instance hiding in cryptography. It builds on the Mixup technique, proposed by Zhang et al. (ICLR 2018) as a way to improve generalization and robustness. Usual mixup involves training on nonnegative (convex) combinations of inputs. The new ideas in the current paper are: (a) new variants of mixup with negative as well as positive coefficients, and an extension of sample-wise mixup to pixel-wise mixup; (b) experiments demonstrating the effectiveness of this approach in protecting privacy against known attacks while preserving utility; (c) theoretical analysis suggesting why the method is effective, using ideas from analyses of attacks; (d) estimates of security and the release of a challenge dataset to allow the design of attack schemes.
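To make the mixing variants in (a) concrete, below is a minimal NumPy sketch contrasting classic sample-wise mixup (nonnegative coefficients) with a variant that also applies a random pixel-wise sign flip, which is where negative coefficients enter. This is an illustrative sketch, not the authors' reference implementation: the function names, the Dirichlet coefficient sampling, the number of mixed images, and the random ±1 pixel mask are assumptions, and label mixing is omitted for brevity.

# Minimal sketch of the two mixup variants described in the abstract.
# Assumptions (not taken from the paper): function names, Dirichlet
# coefficients, number of mixed images, and the ±1 pixel sign mask.
import numpy as np

def standard_mixup(x1, x2, alpha=1.0, rng=np.random.default_rng()):
    # Classic Mixup (Zhang et al., ICLR 2018): a convex combination of two inputs.
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1.0 - lam) * x2

def mix_with_signed_coefficients(private_img, other_imgs, rng=np.random.default_rng()):
    # Sample-wise step: mix one private image with several others using
    # nonnegative coefficients that sum to 1.
    imgs = [private_img] + list(other_imgs)
    coeffs = rng.dirichlet(np.ones(len(imgs)))
    mixed = sum(c * img for c, img in zip(coeffs, imgs))
    # Pixel-wise step: a random sign flip per pixel supplies the
    # negative-as-well-as-positive coefficients mentioned in the abstract.
    sign_mask = rng.choice([-1.0, 1.0], size=mixed.shape)
    return sign_mask * mixed

# Usage: encode a 32x32 RGB image by mixing it with two other images.
x_private = np.random.rand(32, 32, 3)
x_others = [np.random.rand(32, 32, 3) for _ in range(2)]
encoded = mix_with_signed_coefficients(x_private, x_others)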
