12/07/2020

Divide and Conquer: Leveraging Intermediate Feature Representations for Quantized Training of Neural Networks

Ahmed T. Elthakeb, Prannoy Pilligundla, FatemehSadat Mireshghallah, Alexander Cloninger, Hadi Esmaeilzadeh

Keywords: Applications - Other

Abstract: The deep layers of modern neural networks extract a rather rich set of features as an input propagates through the network. This paper sets out to harvest these rich intermediate representations for quantization with minimal accuracy loss while significantly reducing the memory footprint and compute intensity of the DNN. This paper utilizes knowledge distillation through the teacher-student paradigm (Hinton et al., 2015) in a novel setting that exploits the feature extraction capability of DNNs for higher-accuracy quantization. As such, our algorithm logically divides a pretrained full-precision DNN into multiple sections, each of which exposes intermediate features to train a team of students independently in the quantized domain. This divide and conquer strategy makes it possible to train each student section in isolation, while the independently trained sections are later stitched together to form the equivalent fully quantized network. Our algorithm is a sectional approach towards knowledge distillation and does not treat the intermediate representation as a hint for pretraining before one knowledge distillation pass over the entire network (Romero et al., 2015). Experiments on various DNNs (AlexNet, LeNet, ResNet-18, ResNet-20, SVHN and VGG-11) show that this approach, called DCQ (Divide and Conquer Quantization), on average closes the accuracy gap between a state-of-the-art quantized training technique, DoReFa-Net (Zhou et al., 2016), and the full-precision runs by 85% and 92% for binary and ternary quantization of the weights, respectively. Additionally, we show that our approach, DCQ, can improve the performance of existing state-of-the-art knowledge-distillation-based approaches (Mishra et al., 2018) by 1.75% on average for both weight and activation quantization.
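To make the sectional idea concrete, the sketch below illustrates, under stated assumptions, how a pretrained network could be logically divided into sections and how each quantized student section could be trained independently to match its teacher section's intermediate features before the sections are stitched back together. This is a minimal illustrative sketch, not the authors' DCQ implementation: the toy two-section network, the `BinarizeSTE` straight-through binarizer, the MSE feature-matching loss, and all helper names are assumptions made for exposition.

```python
# Illustrative sketch (assumed, not the paper's code): sectional knowledge
# distillation where each quantized student section mimics the intermediate
# features of the corresponding full-precision teacher section.
import copy
import torch
import torch.nn as nn

class BinarizeSTE(torch.autograd.Function):
    """Sign-based weight binarization with a straight-through gradient (assumed scheme)."""
    @staticmethod
    def forward(ctx, w):
        return torch.sign(w) * w.abs().mean()
    @staticmethod
    def backward(ctx, grad_out):
        return grad_out  # straight-through estimator

class BinaryLinear(nn.Linear):
    def forward(self, x):
        return nn.functional.linear(x, BinarizeSTE.apply(self.weight), self.bias)

def binarize_linears(module):
    """In place, swap every nn.Linear inside `module` for a binary-weight version."""
    for name, child in module.named_children():
        if isinstance(child, nn.Linear):
            q = BinaryLinear(child.in_features, child.out_features,
                             bias=child.bias is not None)
            q.load_state_dict(child.state_dict())
            setattr(module, name, q)
        else:
            binarize_linears(child)

def quantize_section(section):
    """Return a quantized student copy of one teacher section."""
    student = copy.deepcopy(section)
    binarize_linears(student)
    return student

# A toy full-precision "teacher", logically divided into two sections.
teacher = nn.Sequential(
    nn.Sequential(nn.Linear(32, 64), nn.ReLU()),   # section 1
    nn.Sequential(nn.Linear(64, 10)),              # section 2
)
teacher.eval()

# Each student section is trained independently: its input is the teacher's
# intermediate activation and its target is the teacher section's output.
students = [quantize_section(sec) for sec in teacher]
for idx, (t_sec, s_sec) in enumerate(zip(teacher, students)):
    opt = torch.optim.Adam(s_sec.parameters(), lr=1e-3)
    for _ in range(100):
        x = torch.randn(16, 32)          # stand-in for training inputs
        with torch.no_grad():
            for prev in teacher[:idx]:   # propagate through earlier teacher sections
                x = prev(x)
            target = t_sec(x)
        loss = nn.functional.mse_loss(s_sec(x), target)
        opt.zero_grad(); loss.backward(); opt.step()

# Stitch the independently trained quantized sections into one network.
quantized_model = nn.Sequential(*students)
```

Because each section is trained in isolation, the sections could in principle be trained in parallel; only the final stitched model needs to be evaluated end to end.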

Talk and paper published at the ICML 2020 virtual conference.
