
Channel Compression: Rethinking Information Redundancy among Channels in CNN Architecture

Abstract: Model compression and acceleration are attracting increasing attention due to the demand for embedded devices and mobile applications. Research on efficient convolutional neural networks (CNNs) aims at removing feature redundancy by decomposing or optimizing the convolutional calculation. In this work, feature redundancy is assumed to exist among channels in CNN architectures, which provides some leeway to boost calculation efficiency. Aiming at channel compression, a novel convolutional construction named compact convolution is proposed to embrace the progress in spatial convolution, channel grouping and pooling operations. Specifically, the depth-wise separable convolution and the point-wise inter-channel operation are utilized to extract features efficiently. Different from existing channel compression methods, which usually introduce considerable learnable weights, the proposed compact convolution can reduce feature redundancy with no extra parameters. With the point-wise inter-channel operation, compact convolutions implicitly squeeze the channel dimension of feature maps. To explore the rules for reducing channel redundancy in neural networks, a comparison is made among different point-wise inter-channel operations. Moreover, compact convolutions are extended to tackle multiple tasks, such as acoustic scene classification, sound event detection and image classification. Extensive experiments demonstrate that our compact convolution not only exhibits high effectiveness in several multimedia tasks, but can also be implemented efficiently by benefiting from parallel computation.
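The sketch below is not the authors' reference implementation; it is a minimal illustration of the idea described in the abstract, assuming the "compact convolution" pairs a depth-wise spatial convolution with a parameter-free point-wise inter-channel operation (here, max pooling over channel groups) that squeezes the channel dimension. The group size and tensor shapes are illustrative assumptions.

```python
# Minimal sketch of a compact-convolution-style block (illustrative only):
# a depth-wise convolution for spatial feature extraction, followed by a
# parameter-free channel-group max pooling that halves the channel count.
import torch
import torch.nn as nn


class CompactConv(nn.Module):
    def __init__(self, in_channels, kernel_size=3, group_size=2):
        super().__init__()
        # Depth-wise spatial convolution: one filter per input channel.
        self.depthwise = nn.Conv2d(
            in_channels, in_channels, kernel_size,
            padding=kernel_size // 2, groups=in_channels, bias=False)
        self.group_size = group_size  # channels merged into one output channel

    def forward(self, x):
        x = self.depthwise(x)                        # (N, C, H, W)
        n, c, h, w = x.shape
        # Point-wise inter-channel operation with no learnable weights:
        # reshape channels into groups and keep the maximum within each group,
        # reducing C to C // group_size.
        x = x.view(n, c // self.group_size, self.group_size, h, w)
        return x.max(dim=2).values                   # (N, C // group_size, H, W)


if __name__ == "__main__":
    block = CompactConv(in_channels=32)
    feats = torch.randn(8, 32, 40, 40)
    print(block(feats).shape)  # torch.Size([8, 16, 40, 40])
```

Swapping the group-wise max for an average (or another pooling operation) gives one way to compare different point-wise inter-channel operations, as the abstract mentions.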
