22/11/2021

Beyond Classification: Knowledge Distillation using Multi-Object Impressions

Gaurav Kumar Nayak, Monish K Keswani, Sharan Seshadri, Anirban Chakraborty

Keywords: Knowledge Distillation (KD), zero-shot, data-free, object detection, data privacy, multi-object impressions, pseudo-data, pseudo-targets, synthetic data, Faster R-CNN

Abstract: Knowledge Distillation (KD) utilizes training data as a transfer set to transfer knowledge from a complex network (Teacher) to a smaller network (Student). Several recent works have identified scenarios where the training data may be unavailable due to data privacy or sensitivity concerns, and have proposed solutions under this restrictive constraint for the classification task. Unlike existing works, we solve, for the first time, a much more challenging problem: KD for object detection with zero knowledge about the training data and its statistics. Our proposed approach prepares pseudo-targets and synthesizes the corresponding samples (termed "Multi-Object Impressions") using only the pretrained Faster R-CNN Teacher network. We use this pseudo-dataset as a transfer set to conduct zero-shot KD for object detection. We demonstrate the efficacy of our method through several ablations and extensive experiments on benchmark datasets such as KITTI, Pascal VOC and COCO. With no training samples, our approach achieves a respectable mAP of 64.2% and 55.5% on Students of the same and half capacity, respectively, while distilling from a ResNet-18 Teacher with 73.3% mAP on KITTI.
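
The abstract compresses a two-stage pipeline: first synthesize "Multi-Object Impressions" using only the Teacher, then distill the Student on them. Below is a minimal sketch of the first stage, assuming a torchvision Faster R-CNN Teacher (ResNet-50 FPN here, rather than the paper's ResNet-18) and using the model's built-in detection losses as a stand-in for the paper's synthesis objective; the pseudo-target boxes, labels, image size, learning rate and step count are all illustrative, not the authors' settings.

```python
# Sketch: optimize a noise image so a pretrained Faster R-CNN Teacher
# "explains" a sampled pseudo-target (bounding boxes + class labels).
import torch
import torchvision

teacher = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
teacher.train()                      # train mode => forward() returns loss dict
for p in teacher.parameters():
    p.requires_grad_(False)          # only the synthetic image is optimized

# Hypothetical pseudo-target: two objects with arbitrary classes (xyxy pixels).
pseudo_target = [{
    "boxes": torch.tensor([[30.0, 40.0, 180.0, 200.0],
                           [220.0, 60.0, 380.0, 240.0]]),
    "labels": torch.tensor([3, 17]),
}]

image = torch.rand(3, 300, 400, requires_grad=True)   # noise initialization
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):              # iteration budget is arbitrary
    optimizer.zero_grad()
    loss_dict = teacher([image.clamp(0, 1)], pseudo_target)
    loss = sum(loss_dict.values())   # RPN + ROI-head classification/box losses
    loss.backward()
    optimizer.step()

# `image.detach()` is now one "Multi-Object Impression" for the transfer set.
```

Repeating this over many sampled pseudo-targets yields the transfer set; zero-shot KD then trains the Student to match the Teacher's detections on these synthetic images.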

Talk and paper published at the BMVC 2021 virtual conference.
