
The Effects of Approximate Multiplication on Convolutional Neural Networks

Date: 2021-05-06
Pages: 13

Abstract: This paper analyzes the effects of approximate multiplication when performing inferences on deep convolutional neural networks (CNNs). Approximate multiplication can reduce the cost of the underlying circuits so that CNN inferences can be performed more efficiently in hardware accelerators. The study identifies the critical factors in the convolution, fully-connected, and batch normalization layers that allow more accurate CNN predictions despite the errors from approximate multiplication. The same factors also provide an arithmetic explanation of why bfloat16 multiplication performs well on CNNs. The experiments are performed with recognized network architectures to show that the approximate multipliers can produce predictions that are nearly as accurate as the FP32 references, without additional training. For example, the ResNet and Inception-v4 models with Mitch-$w$6 multiplication produce Top-5 errors that are within 0.2% of the FP32 references. A brief cost comparison of Mitch-$w$6 against bfloat16 is presented, where a MAC operation saves up to 80% of energy compared to the bfloat16 arithmetic. The most far-reaching contribution of this paper is the analytical justification that multiplications can be approximated while additions need to be exact in CNN MAC operations.
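The Mitch-$w$ multiplier family discussed in the abstract is derived from Mitchell's logarithmic multiplication, and bfloat16 is FP32 with the mantissa truncated to 7 bits. The sketch below is only a simplified, software-level illustration of those two ideas, not the paper's hardware design; the function names `mitchell_multiply` and `to_bfloat16` are my own, and the float-based mantissa handling stands in for what would be fixed-point logic in an actual circuit.

```python
import struct


def mitchell_multiply(a: int, b: int) -> int:
    """Approximate a*b with Mitchell's logarithmic multiplication.

    Uses the approximation log2(1+x) ~= x for x in [0, 1): each operand
    2^k * (1+x) is encoded as k + x, the two encodings are added, and the
    sum is decoded back with the same approximation. The result always
    underestimates the true product, by at most about 11.1%.
    """
    if a == 0 or b == 0:
        return 0
    k1 = a.bit_length() - 1        # characteristic (integer part of log2)
    k2 = b.bit_length() - 1
    x1 = a / (1 << k1) - 1.0       # mantissa fraction in [0, 1)
    x2 = b / (1 << k2) - 1.0
    s = x1 + x2
    if s < 1.0:                    # no carry out of the mantissa sum
        return int((1 << (k1 + k2)) * (1.0 + s))
    return int((1 << (k1 + k2 + 1)) * s)   # carry into the exponent


def to_bfloat16(x: float) -> float:
    """Truncate an FP32 value to bfloat16 precision.

    bfloat16 keeps FP32's sign bit and 8-bit exponent but only the top
    7 mantissa bits, so dropping the low 16 bits of the FP32 encoding
    yields the (truncation-rounded) bfloat16 value.
    """
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    bits &= 0xFFFF0000
    return struct.unpack("<f", struct.pack("<I", bits))[0]
```

For example, `mitchell_multiply(3, 3)` returns 8 instead of 9, the worst-case relative error of Mitchell's scheme; products of powers of two are exact. The consistent one-sided (underestimating) error is one reason such multipliers interact predictably with the accumulation in CNN MAC operations.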
