19/08/2021

Safety Analysis of Deep Neural Networks

Dario Guidotti

Keywords: AI Ethics, Trust, Fairness, Trustable Learning, Validation and Verification, Deep Learning, Adversarial Machine Learning

Abstract: Deep Neural Networks (DNNs) are popular machine learning models which have found successful application in many different domains across computer science. Nevertheless, providing formal guarantees on the behaviour of neural networks is hard and therefore their reliability in safety-critical domains is still a concern. Verification and repair emerged as promising solutions to address this issue. In the following, I will present some of my recent efforts in this area.
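As a flavour of what "formal guarantees" can mean in this setting, the sketch below certifies local robustness of a toy ReLU network using interval bound propagation, one common verification technique. This is an illustrative example, not taken from the paper: the network weights, the input, and the epsilon radius are all made-up values.

```python
# Illustrative sketch (not from the paper): certifying local robustness of a
# tiny ReLU network with interval bound propagation (IBP).
import numpy as np

def interval_affine(lo, hi, W, b):
    # Propagate the box [lo, hi] through the affine map x -> Wx + b.
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def ibp_bounds(x, eps, layers):
    # Bound the outputs over the L-infinity ball of radius eps around x.
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        lo, hi = interval_affine(lo, hi, W, b)
        if i < len(layers) - 1:          # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

# Toy 2-2-2 network; assume class 0 is the correct label.
layers = [
    (np.array([[1.0, -1.0], [0.5, 0.5]]), np.zeros(2)),  # hidden layer
    (np.eye(2), np.zeros(2)),                            # output layer
]
x = np.array([1.0, 0.2])
lo, hi = ibp_bounds(x, eps=0.05, layers=layers)
# Robust if the worst-case logit of class 0 beats the best case of class 1.
robust = bool(lo[0] > hi[1])
print(robust)  # → True for these toy values
```

If the check succeeds, no perturbation within the epsilon ball can flip the predicted class; if it fails, the result is inconclusive, since interval bounds over-approximate the reachable outputs.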

The talk and the respective paper are published at the IJCAI 2021 virtual conference.
