Abstract:
This paper proposes to use intrinsic examples as a DNN fingerprinting technique for the functionality verification of DNN models implemented on edge devices. The proposed intrinsic examples do not affect the normal DNN training and can enable the black-box testing capability for DNN models packaged into edge device applications. We provide three algorithms for deriving intrinsic examples of the pre-trained model (the model before the DNN system design and implementation procedure) to retrieve the knowledge learnt from the training dataset for the detection of adversarial third-party attacks such as transfer learning and fault injection attack that may happen during the system implementation procedure. Besides, they can accommodate the model transformations due to various DNN model compression methods used by the system designer.