12/07/2020

Adversarial Robustness for Code

Pavol Bielik, Martin Vechev

Keywords: Adversarial Examples

Abstract: Machine learning, and deep learning in particular, has recently been used to successfully address many tasks in the domain of code, including finding and fixing bugs, code completion, decompilation, malware detection, type inference, and many others. However, the issue of adversarial robustness of models for code has gone largely unnoticed. In this work we address this gap by: (i) developing adversarial attacks for code (a domain with discrete and highly structured inputs), (ii) showing that, similar to other domains, neural models for code are highly vulnerable to adversarial attacks, and (iii) developing a set of novel techniques that enable training robust and accurate models of code.
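The abstract names attacks on discrete, highly structured inputs but gives no algorithm here. Below is a minimal, hypothetical Python sketch of one common attack family in this space: semantics-preserving identifier renaming driven by a greedy search over a model's loss. The names greedy_rename_attack and model_loss, the candidate pool, and the search strategy are illustrative assumptions, not the paper's method.

import re
from typing import Callable, Iterable, Set, Tuple

def rename_identifier(code: str, old: str, new: str) -> str:
    # Whole-word replacement of an identifier; semantics-preserving for
    # simple snippets without shadowing or string/comment collisions.
    return re.sub(rf"\b{re.escape(old)}\b", new, code)

def greedy_rename_attack(
    code: str,
    identifiers: Set[str],
    candidates: Iterable[str],
    model_loss: Callable[[str], float],
    max_edits: int = 3,
) -> Tuple[str, float]:
    # Greedily rename one identifier at a time, keeping the rename that
    # most increases the (hypothetical) model's loss on the snippet.
    best_code, best_loss = code, model_loss(code)
    for _ in range(max_edits):
        step_code, step_loss, step_ident = None, best_loss, None
        for ident in identifiers:
            for cand in candidates:
                mutated = rename_identifier(best_code, ident, cand)
                loss = model_loss(mutated)
                if loss > step_loss:
                    step_code, step_loss, step_ident = mutated, loss, ident
        if step_code is None:  # no single rename increases the loss further
            break
        best_code, best_loss = step_code, step_loss
        identifiers = identifiers - {step_ident}
    return best_code, best_loss

# Toy usage with a placeholder loss; a real attack would query the model:
snippet = "def download(url, timeout):\n    return fetch(url, timeout)"
adv, loss = greedy_rename_attack(
    snippet,
    identifiers={"url", "timeout"},
    candidates=["path", "count", "tmp"],
    model_loss=lambda c: float(len(set(c.split()))),  # stand-in only
)
print(adv)

Retraining on such worst-case mutants (standard adversarial training) is one baseline way to harden a model; point (iii) of the abstract refers to the paper's own techniques for training robust and accurate models, which this sketch does not reproduce.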

[Embedded talk video] The talk and the paper are published at the ICML 2020 virtual conference.
