22/11/2021

Rethinking Clustering for Robustness

Motasem Alfarra, Juan C Perez (Universidad de los Andes, King Abdullah University of Science and Technology), Adel Bibi, Ali K Thabet, Pablo Arbelaez, Bernard Ghanem

Keywords: adversarial robustness, clustering, metric learning

Abstract: This paper studies how encouraging semantically-aligned features during deep neural network training can increase network robustness. Recent works observed that Adversarial Training leads to robust models, whose learnt features appear to correlate with human perception. Inspired by this connection from robustness to semantics, we study the complementary connection: from semantics to robustness. To do so, we provide a robustness certificate for distance-based classification models (clustering-based classifiers). Moreover, we show that this certificate is tight, and we leverage it to propose ClusTR (Clustering Training for Robustness), a clustering-based and adversary-free training framework to learn robust models. Interestingly, ClusTR outperforms adversarially-trained networks by up to 4% under strong PGD attacks.
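The certificate discussed in the abstract applies to distance-based (clustering-based) classifiers, i.e., models that assign to an input the class of the nearest cluster centroid in feature space. The following is a minimal PyTorch sketch of such a classifier for illustration only; it is not the authors' implementation, and the encoder, centroid tensor, and all names are assumptions.

```python
import torch
import torch.nn as nn


class NearestCentroidClassifier(nn.Module):
    """Distance-based classifier: class scores are negated distances to class centroids."""

    def __init__(self, encoder: nn.Module, centroids: torch.Tensor):
        super().__init__()
        self.encoder = encoder
        # centroids: (num_classes, feature_dim) tensor of per-class cluster centers
        self.register_buffer("centroids", centroids)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(x)                      # (batch, feature_dim)
        dists = torch.cdist(feats, self.centroids)   # (batch, num_classes) Euclidean distances
        return -dists                                # nearest centroid receives the highest score


# Usage sketch (hypothetical names): predict the class whose centroid is closest.
# encoder = MyBackbone()
# clf = NearestCentroidClassifier(encoder, centroids)
# preds = clf(images).argmax(dim=1)
```

Under this view, a prediction stays unchanged as long as a perturbation cannot make another centroid closer than the predicted one, which is the intuition behind certifying such classifiers.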

Talk and paper published at BMVC 2021 (virtual conference).
