02/02/2021

PID-Based Approach to Adversarial Attacks

Chen Wan, Biaohua Ye, Fangjun Huang

Abstract: Adversarial attacks can mislead deep neural networks (DNNs) by adding small-magnitude perturbations to normal examples; the direction of the perturbation is mainly determined by the gradient of the loss function with respect to the input. Various strategies have previously been proposed to enhance the performance of adversarial attacks. However, all of these methods utilize only present and past gradients to generate adversarial examples; the future trend of the gradient (i.e., the derivative of the gradient) has not yet been considered. Inspired by the classic proportional-integral-derivative (PID) controller from the field of automatic control, we propose a new PID-based approach for generating adversarial examples. Our method incorporates the present gradient, the accumulated past gradients, and the derivative of the gradient, which correspond to the P, I, and D components of the PID controller, respectively. Extensive experiments consistently demonstrate that our method achieves higher attack success rates and exhibits better transferability than state-of-the-art gradient-based adversarial attacks. Furthermore, our method is highly extensible and can be applied to almost all available gradient-based adversarial attacks.
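To make the PID analogy concrete, the sketch below shows one way such an update rule could be written in PyTorch, patterned after momentum-style iterative attacks (e.g., MI-FGSM). The coefficients k_p, k_i, and k_d, the gradient normalization, and the L-infinity projection are illustrative assumptions, not the authors' exact formulation.

```python
import torch

def pid_attack(model, loss_fn, x, y, eps=8/255, steps=10,
               k_p=1.0, k_i=1.0, k_d=1.0):
    """Sketch of a PID-style iterative attack (hypothetical coefficients).

    P: present gradient; I: accumulated past gradients (momentum-like);
    D: change of the gradient between consecutive steps (its "trend").
    Assumes x is a batch of images in [0, 1] with shape (N, C, H, W).
    """
    alpha = eps / steps                 # per-step budget
    x_adv = x.clone().detach()
    integral = torch.zeros_like(x)      # I term: running sum of gradients
    prev_grad = torch.zeros_like(x)     # remembered for the D term

    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]

        # Normalize per example, as is common in momentum-based attacks
        grad = grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True).clamp_min(1e-12)

        integral = integral + grad      # I: past plus present gradients
        derivative = grad - prev_grad   # D: estimate of the gradient's trend
        prev_grad = grad.detach()

        update = k_p * grad + k_i * integral + k_d * derivative
        x_adv = x_adv.detach() + alpha * update.sign()
        # Project back into the L-infinity ball and the valid pixel range
        x_adv = x.detach() + (x_adv - x).clamp(-eps, eps)
        x_adv = x_adv.clamp(0, 1)
    return x_adv
```

With k_i = 1 and k_d = 0 this reduces to a momentum-like accumulation; the D term is what distinguishes the PID view, since it anticipates how the gradient is changing rather than only summing where it has been.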

The video of this talk cannot be embedded. You can watch it here:
https://slideslive.com/38949057
The talk and the paper were published at the AAAI 2021 virtual conference.
