
Standing on the Shoulders of Giants: Hardware and Neural Architecture Co-Search with Hot Start


Abstract: Hardware and neural architecture co-search that automatically generates Artificial Intelligence (AI) solutions from a given dataset is promising to promote AI democratization; however, the amount of time required by current co-search frameworks is on the order of hundreds of GPU hours for one target hardware. This inhibits the use of such frameworks on commodity hardware. The root cause of the low efficiency in existing co-search frameworks is the fact that they start from a "cold" state (i.e., search from scratch). In this paper, we propose a novel framework, namely HotNAS, that starts from a "hot" state based on a set of existing pre-trained models (a.k.a. model zoo) to avoid lengthy training time. As such, the search time can be reduced from 200 GPU hours to less than 3 GPU hours. In HotNAS, in addition to the hardware design space and the neural architecture search space, we further integrate a compression space to conduct model compression during the co-search, which creates new opportunities to reduce latency but also brings challenges. One of the key challenges is that all of the above search spaces are coupled with each other; e.g., compression may not work without hardware design support. To tackle this issue, HotNAS builds a chain of tools to design hardware that supports compression, based on which a global optimizer is developed to automatically co-search all the involved search spaces. Experiments on the ImageNet dataset and Xilinx FPGAs show that, within a timing constraint of 5ms, the neural architectures generated by HotNAS can achieve up to 5.79% Top-1 and 3.97% Top-5 accuracy gains, compared with existing ones.
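To make the co-search idea in the abstract concrete, the toy Python sketch below jointly picks a pre-trained model from a "model zoo" (the hot start), a compression level, and a hardware configuration under a 5ms latency budget. Everything in it is a hypothetical stand-in: the zoo entries, the cost model, and the exhaustive loop are illustrative placeholders, not HotNAS's actual optimizer, search spaces, or numbers.

```python
# Minimal sketch of "hot start" co-search over coupled spaces.
# All names, values, and cost models are hypothetical, for illustration only.

from dataclasses import dataclass
from itertools import product

@dataclass
class ZooModel:
    name: str
    top1: float          # pre-trained Top-1 accuracy (%), a stand-in value
    base_latency: float  # estimated latency in ms on a reference design

# A toy model zoo: starting "hot" from pre-trained models avoids
# training candidates from scratch during the search.
MODEL_ZOO = [
    ZooModel("net_a", 74.0, 9.0),
    ZooModel("net_b", 71.5, 6.0),
    ZooModel("net_c", 69.0, 4.0),
]

# Hypothetical compression space (pruning ratios) and hardware design
# space (compute-unit replication). In HotNAS these spaces are coupled:
# a compression choice only pays off if the hardware supports it.
COMPRESSION = [0.0, 0.25, 0.5]   # fraction of weights pruned
PARALLELISM = [1, 2, 4]          # hardware compute-unit replication

def estimate(model: ZooModel, prune: float, pe: int):
    """Toy cost model: pruning cuts latency but costs accuracy; extra
    hardware parallelism cuts latency further. Numbers are invented."""
    latency = model.base_latency * (1.0 - 0.6 * prune) / pe
    accuracy = model.top1 - 8.0 * prune  # assumed accuracy penalty
    return latency, accuracy

def co_search(zoo, budget_ms=5.0):
    """Exhaustive stand-in for the paper's global optimizer: jointly pick
    a zoo model, a compression level, and a hardware configuration that
    meet the latency budget with the best estimated accuracy."""
    best = None
    for model, prune, pe in product(zoo, COMPRESSION, PARALLELISM):
        latency, acc = estimate(model, prune, pe)
        if latency <= budget_ms and (best is None or acc > best[0]):
            best = (acc, model.name, prune, pe, latency)
    return best

if __name__ == "__main__":
    # e.g. (74.0, 'net_a', 0.0, 2, 4.5): the most accurate model fits
    # the 5ms budget once the hardware design is co-selected.
    print(co_search(MODEL_ZOO))
```

Note how the coupling the abstract mentions shows up even in this sketch: the latency estimate depends on both the compression ratio and the hardware parallelism, so neither space can be searched in isolation. The exhaustive loop merely stands in for the global optimizer the paper develops.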
