22/11/2021

PDF-Distil: including Prediction Disagreements in Feature-based Distillation for object detection

Heng Zhang, Elisa Fromont, Sébastien Lefèvre, Bruno Avignon

Keywords: knowledge distillation, object detection

Abstract: Knowledge distillation aims at compressing deep models by transferring the knowledge learned by precise but cumbersome teacher models to compact student models. Because of the extreme imbalance between foreground and background in images, directly applying traditional knowledge distillation methods to object detection leaves a large performance gap between the teacher and the student. We tackle this imbalance problem from a sampling perspective and propose to include teacher-student prediction disagreements in a feature-based knowledge distillation framework. PDF-Distil does this by dynamically generating a weighting mask, applied to the knowledge distillation loss, from the disagreements between the predictions of the two models. Extensive experiments on the PASCAL VOC and MS COCO datasets demonstrate that, compared to state-of-the-art methods, PDF-Distil better reduces the performance gap between the teacher and student models.
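The abstract describes the core mechanism: a weighting mask, derived from teacher-student prediction disagreements, that re-weights a feature-based distillation loss. Purely as an illustration of that idea, here is a minimal PyTorch-style sketch; the function name pdf_distil_loss, the L1 disagreement measure, the mask normalization, and the MSE feature-imitation term are all assumptions made for this sketch, not details taken from the paper.

import torch
import torch.nn.functional as F

def pdf_distil_loss(student_feat, teacher_feat, student_logits, teacher_logits):
    """Hypothetical sketch of a disagreement-weighted feature distillation loss.

    student_feat / teacher_feat: (N, C, H, W) feature maps (same shape assumed).
    student_logits / teacher_logits: (N, K, H, W) per-location class predictions.
    """
    # Per-location prediction disagreement between teacher and student,
    # measured here (as an assumption) by the L1 gap between their
    # class probability distributions.
    with torch.no_grad():
        p_s = student_logits.softmax(dim=1)
        p_t = teacher_logits.softmax(dim=1)
        disagreement = (p_s - p_t).abs().sum(dim=1, keepdim=True)  # (N, 1, H, W)
        # Normalize into a weighting mask so that high-disagreement locations
        # (typically foreground) dominate the distillation loss instead of the
        # overwhelmingly numerous background locations.
        mask = disagreement / (disagreement.sum(dim=(2, 3), keepdim=True) + 1e-6)

    # Feature imitation loss, re-weighted by the disagreement mask.
    per_pixel = F.mse_loss(student_feat, teacher_feat,
                           reduction="none").mean(dim=1, keepdim=True)
    return (mask * per_pixel).sum(dim=(2, 3)).mean()

Because the mask is recomputed from the current student predictions at every step, the weighting adapts dynamically during training, which matches the behaviour the abstract attributes to PDF-Distil.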

[Embedded video: the talk and the paper are published at the BMVC 2021 virtual conference.]
