05/04/2021

Rethinking Floating Point Overheads for Mixed Precision DNN Accelerators

Hamzah Abdelaziz, Ali Shafiee, Jong Hoon Shin, Ardavan Pedram, Joseph Hassoun

Keywords:

Abstract: Mixed-precision DNN accelerators are becoming more ubiquitous, especially when both efficient training and inference are required. In this paper, we propose a mixed-precision convolution unit architecture that supports different integer and floating-point (FP) precisions. The proposed architecture is based on low-bit inner product units and realizes higher precision through temporal decomposition. We illustrate how to integrate FP computations into an integer-based architecture and evaluate the overheads incurred by FP arithmetic support. We argue that the alignment and addition overhead of an FP inner product can be significant, since the maximum exponent difference can be up to 58 bits, which results in large alignment logic. To address this issue, we show empirically that at least 8 bits of alignment logic are required to maintain inference accuracy. We present novel optimizations based on these observations to reduce the FP arithmetic hardware overheads. Our empirical results, based on simulation and hardware implementation, show a significant reduction in FP16 overhead. Over a typical mixed-precision implementation, the proposed architecture achieves area-efficiency improvements of up to 25% in TFLOPS/mm^2 and up to 46% in TOPS/mm^2, with power-efficiency improvements of up to 40% in TFLOPS/W and up to 63% in TOPS/W.
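To make the alignment trade-off concrete, below is a minimal NumPy sketch, not the authors' hardware design: each FP16 x FP16 product is split into mantissa and exponent, mantissas are right-shifted to the largest exponent in the group before fixed-point accumulation, and products that would need a shift wider than the alignment window are dropped. The align_bits and mant_bits parameters are illustrative assumptions, not values from the paper's implementation.

import numpy as np

def limited_alignment_dot(a, b, align_bits=8, mant_bits=10):
    # Toy model of an FP inner-product unit with a truncated alignment shifter.
    # Products needing a right-shift larger than align_bits fall outside the
    # shifter range and are dropped (illustrative assumption).
    prods = a.astype(np.float32) * b.astype(np.float32)
    mant, exp = np.frexp(prods)              # prod = mant * 2**exp, |mant| in [0.5, 1)
    max_exp = exp.max()
    shift = max_exp - exp                    # right-shift needed to align to max_exp
    aligned = np.where(shift <= align_bits,
                       np.round(mant * 2.0 ** (mant_bits - shift)),
                       0.0)                  # quantize to mant_bits fractional bits
    return aligned.sum() * 2.0 ** (max_exp - mant_bits)

rng = np.random.default_rng(0)
a = rng.standard_normal(64).astype(np.float16)
b = rng.standard_normal(64).astype(np.float16)
print(np.dot(a.astype(np.float64), b.astype(np.float64)), limited_alignment_dot(a, b))

Widening align_bits toward the full exponent range recovers the exact sum at the cost of a wider shifter, which is the area/accuracy trade-off the abstract quantifies.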

The video of this talk is available at https://slideslive.com/38952742 (MLSYS 2021 virtual conference).
