22/11/2021

Bias Field Robustness Verification of Large Neural Image Classifiers

Patrick Henriksen, Kerstin Hammernik, Daniel Rueckert, Alessio Lomuscio

Keywords: Formal Verification, Neural Networks, Bias Fields, MRI, Robustness, Adversarial Examples

Abstract: We present a method for verifying the robustness of neural network-based image classifiers against a large class of intensity perturbations that frequently occur in computer vision. These perturbations, or intensity inhomogeneities, can be modelled by a spatially varying, multiplicative transformation of the intensities by a bias field. We illustrate an encoding of bias field transformations into neural network operations in order to exploit neural network formal verification toolkits. We extend the toolkit VeriNet with the above encoding, GPU support, input-domain splitting and a symbolic interval propagation pre-processing step. Finally, we show that the resulting implementation, VeriNetBF, can analyse models with up to 11M tuneable parameters and 6.5M ReLU nodes trained on the CIFAR-10, ImageNet and NYU fastMRI datasets.
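The abstract describes bias fields as spatially varying, multiplicative intensity transformations. A minimal sketch of such a perturbation is below, assuming a low-order polynomial parameterisation over normalised pixel coordinates (a common model for MRI intensity inhomogeneity; the paper's exact encoding into network operations may differ, and the function names here are illustrative):

```python
import numpy as np

def polynomial_bias_field(h, w, coeffs):
    """Build a low-order polynomial bias field over normalised coordinates.

    coeffs maps exponent pairs (i, j) to coefficients, so that the field is
    sum_{i,j} c_ij * x**i * y**j on the grid [-1, 1] x [-1, 1].
    {(0, 0): 1.0} gives the identity field (no perturbation).
    """
    ys, xs = np.meshgrid(
        np.linspace(-1.0, 1.0, h), np.linspace(-1.0, 1.0, w), indexing="ij"
    )
    field = np.zeros((h, w))
    for (i, j), c in coeffs.items():
        field += c * (xs ** i) * (ys ** j)
    return field

def apply_bias_field(image, field):
    # Multiplicative, spatially varying intensity perturbation.
    return image * field

# Example: a gentle linear intensity gradient across a 32x32 image.
img = np.random.rand(32, 32)
field = polynomial_bias_field(32, 32, {(0, 0): 1.0, (1, 0): 0.05, (0, 1): -0.05})
perturbed = apply_bias_field(img, field)
```

Verifying robustness then amounts to checking that the classifier's prediction is unchanged for all bias fields whose coefficients lie in a given range, rather than for a single sampled field as above.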

Talk and paper published at the BMVC 2021 virtual conference.
