12/07/2020

Stronger and Faster Wasserstein Adversarial Attacks

Kaiwen Wu, Allen Wang, Yaoliang Yu

Keywords: Adversarial Examples

Abstract: Deep models, while being extremely flexible and accurate, are surprisingly vulnerable to "small, imperceptible" perturbations known as adversarial attacks. While the majority of existing attacks focuses on measuring perturbations under the $\ell_p$ metric, the Wasserstein distance, which takes the geometry of pixel space into account, has long been known to be a better metric for measuring image quality and has recently emerged as a compelling alternative to the $\ell_p$ metric in adversarial attacks. However, constructing an effective attack under the Wasserstein metric is computationally much more challenging and calls for better optimization algorithms. We address this gap in two ways: (a) we develop an exact yet efficient projection operator to enable a stronger projected gradient attack; (b) we show for the first time that the conditional gradient method, equipped with a suitable linear minimization oracle, works extremely fast under Wasserstein constraints. Our algorithms not only converge faster but also generate much stronger attacks. For instance, we decrease the accuracy of a residual network on CIFAR-10 to less than $30\%$ within a Wasserstein perturbation ball of radius $0.005$, in contrast to $65.2\%$ using the previous state-of-the-art attack based on approximate projection.
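For context, the two optimization templates named in the abstract, projected gradient ascent and the conditional gradient (Frank-Wolfe) method, follow a common structure. Below is a minimal, generic sketch of both loops in PyTorch. The Wasserstein-specific pieces, an exact projection onto the Wasserstein ball and a linear minimization oracle over it, are the paper's contributions and are not reproduced here; they appear only as placeholder callables `project` and `lmo`, and all names, step sizes, and iteration counts are illustrative assumptions, not the authors' implementation.

```python
import torch

def projected_gradient_attack(model, loss_fn, x, y, project, epsilon,
                              step_size=0.1, num_steps=50):
    """Generic projected-gradient attack loop.

    `project` is a placeholder for a projection onto the chosen
    perturbation ball around x (e.g. a Wasserstein ball of radius
    `epsilon`); its exact construction is not specified here.
    """
    x_adv = x.clone().detach()
    for _ in range(num_steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + step_size * grad       # ascent step on the loss
            x_adv = project(x_adv, x, epsilon)     # project back into the ball
            x_adv = x_adv.clamp(0.0, 1.0)          # keep valid pixel range
    return x_adv.detach()

def frank_wolfe_attack(model, loss_fn, x, y, lmo, num_steps=50):
    """Generic conditional-gradient (Frank-Wolfe) attack loop.

    `lmo` is a placeholder linear minimization oracle: given the current
    gradient, it returns the point of the feasible set (here, a ball
    around x) that maximizes the linear approximation of the loss.
    """
    x_adv = x.clone().detach()
    for t in range(num_steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            s = lmo(grad, x)                       # extreme point from the oracle
            gamma = 2.0 / (t + 2.0)                # standard step-size schedule
            x_adv = (1.0 - gamma) * x_adv + gamma * s
    return x_adv.detach()
```

The only difference between the two templates is how feasibility is enforced: the first takes an unconstrained gradient step and then projects, while the second moves toward an extreme point returned by the oracle and therefore stays feasible by construction, which is why a fast linear minimization oracle can make it very efficient.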

This talk and the corresponding paper were published at the ICML 2020 virtual conference.

