
E2GC: Energy-efficient Group Convolution in Deep Neural Networks


Document pages: 6

Abstract: The number of groups ($g$) in group convolution (GConv) is selected to boost the predictive performance of deep neural networks (DNNs) in a compute- and parameter-efficient manner. However, we show that naive selection of $g$ in GConv creates an imbalance between the computational complexity and the degree of data reuse, which leads to suboptimal energy efficiency in DNNs. We devise an optimum group size model, which enables a balance between computational cost and data movement cost, thus optimizing the energy efficiency of DNNs. Based on the insights from this model, we propose an "energy-efficient group convolution" (E2GC) module where, unlike previous implementations of GConv, the group size ($G$) remains constant. Further, to demonstrate the efficacy of the E2GC module, we incorporate this module in the design of MobileNet-V1 and ResNeXt-50 and perform experiments on two GPUs, P100 and P4000. We show that, at comparable computational complexity, DNNs with a constant group size (E2GC) are more energy-efficient than DNNs with a fixed number of groups (F$g$GC). For example, on the P100 GPU, the energy efficiency of MobileNet-V1 and ResNeXt-50 is increased by 10.8% and 4.73%, respectively, when E2GC modules substitute the F$g$GC modules in both DNNs. Furthermore, through extensive experimentation with the ImageNet-1K and Food-101 image classification datasets, we show that the E2GC module enables a trade-off between the generalization ability and representational power of a DNN. Thus, the predictive performance of DNNs can be optimized by selecting an appropriate $G$. The code and trained models are available at this https URL.
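The distinction the abstract draws between a fixed number of groups (F$g$GC) and a constant group size (E2GC) can be sketched with the standard cost formula for group convolution: a $k \times k$ GConv with $c_{in}$ input channels, $c_{out}$ output channels, and $g$ groups has $(c_{in}/g) \cdot c_{out} \cdot k^2$ weights. The layer widths, the choices $g = 8$ and $G = 32$, and the helper names below are illustrative assumptions, not values taken from the paper:

```python
# Illustrative sketch (not the paper's code): parameter count of a
# k x k group convolution with g groups.
def gconv_params(c_in, c_out, k, g):
    """Weights of a k x k group convolution with g groups."""
    assert c_in % g == 0 and c_out % g == 0, "channels must divide evenly"
    return (c_in // g) * c_out * k * k

# FgGC: the number of groups g is fixed (here, assumed g = 8), so the
# per-group width c_in / g grows as channels grow with depth.
# E2GC: the group size G is constant (here, assumed G = 32), so the
# number of groups g = c_in / G grows instead and each group stays
# the same width.
for c in (64, 128, 256):
    p_fg = gconv_params(c, c, 3, g=8)        # fixed number of groups
    p_e2 = gconv_params(c, c, 3, g=c // 32)  # constant group size G = 32
    print(c, p_fg, p_e2)
```

Under constant group size the parameter count simplifies to $G \cdot c_{out} \cdot k^2$, i.e. it scales linearly in $c_{out}$ regardless of depth, which is the knob the E2GC module holds fixed while trading generalization ability against representational power.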
