02/02/2021

How Does Data Augmentation Affect Privacy in Machine Learning?

Da Yu, Huishuai Zhang, Wei Chen, Jian Yin, Tie-Yan Liu

Abstract: It has been observed in the literature that data augmentation can significantly mitigate membership inference (MI) attacks, which are widely used to measure a model's information leakage about its training set. In this work, we challenge that observation by proposing new MI attacks that exploit the information in augmented data. We establish the optimal membership inference when the model is trained with augmented data, which inspires us to formulate the MI attack as a set classification problem, i.e., classifying a set of augmented instances instead of a single data point, and to design input-permutation-invariant features. Empirically, we demonstrate that the proposed approach universally outperforms existing methods when the model is trained with data augmentation. Moreover, the proposed approach can achieve higher MI attack success rates on models trained with some data augmentations than existing methods achieve on models trained without data augmentation. Notably, we achieve a 70.1% MI attack success rate on CIFAR10 against a wide residual network, while the previous best approach attains only 61.9%. This suggests that the privacy risk of models trained with data augmentation could be largely underestimated.
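To make the set-classification idea concrete, below is a minimal sketch (not the authors' exact method) of membership inference over a set of augmented views: per-view losses are computed and then sorted, so the resulting feature vector is invariant to the order in which the augmented instances are presented. All names (augmented_views, set_features), the toy model, the augmentations, and the loss threshold are illustrative assumptions; the paper trains an attack classifier on such features rather than thresholding the mean loss.

```python
# Hedged sketch: MI attack as set classification over augmented views.
# All names, the toy model, and the threshold below are illustrative.
import torch
import torch.nn.functional as F

def augmented_views(x, num_views=8):
    """Generate a set of augmented views of x (random horizontal flips and
    small shifts, as a stand-in for the training-time augmentation)."""
    views = []
    for _ in range(num_views):
        v = x.clone()
        if torch.rand(1).item() < 0.5:
            v = torch.flip(v, dims=[-1])  # horizontal flip
        dy, dx = torch.randint(-2, 3, (2,)).tolist()
        v = torch.roll(v, shifts=(dy, dx), dims=(-2, -1))  # small shift
        views.append(v)
    return torch.stack(views)  # (num_views, C, H, W)

def set_features(model, x, y, num_views=8):
    """Permutation-invariant features of the loss set: sorting the per-view
    losses removes any dependence on the order of the augmented instances."""
    views = augmented_views(x, num_views)
    with torch.no_grad():
        logits = model(views)
        losses = F.cross_entropy(logits, y.expand(num_views), reduction="none")
    return torch.sort(losses).values  # (num_views,), sorted ascending

# Usage with a toy model and a simple threshold on the mean loss; in
# practice one would train a small set classifier on these features.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x, y = torch.randn(3, 32, 32), torch.tensor([3])
feats = set_features(model, x, y)
is_member = feats.mean().item() < 0.5  # illustrative threshold, not the paper's
print(feats, is_member)
```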

The video of this talk cannot be embedded. You can watch it here:
https://slideslive.com/38947775
The talk and the respective paper were published at the AAAI 2021 virtual conference.
