
Learned Transferable Architectures Can Surpass Hand-Designed Architectures for Large Scale Speech Recognition

  • king
  • 2021-05-07


Document pages: 5

Abstract: In this paper, we explore neural architecture search (NAS) for automatic speech recognition (ASR) systems. With reference to previous works in the computer vision field, the transferability of the searched architecture is the main focus of our work. The architecture search is conducted on a small proxy dataset, and then the evaluation network, constructed with the searched architecture, is evaluated on the large dataset. In particular, we propose a revised search space for speech recognition tasks which theoretically facilitates the search algorithm to explore architectures with low complexity. Extensive experiments show that: (i) the architecture searched on the small proxy dataset can be transferred to the large dataset for speech recognition tasks; (ii) the architecture learned in the revised search space can greatly reduce the computational overhead and GPU memory usage with mild performance degradation; (iii) the searched architecture achieves more than 20% and 15% (averaged over the four test sets) relative improvements on the AISHELL-2 dataset and the large (10k-hour) dataset, respectively, compared with our best hand-designed DFSMN-SAN architecture. To the best of our knowledge, this is the first report of NAS results on a large-scale dataset (up to 10k hours), indicating the promising application of NAS to industrial ASR systems.
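The search-then-transfer recipe described in the abstract (learn continuous architecture weights on a small proxy dataset, derive a discrete cell from them, then stack that cell into a deeper evaluation network for the large dataset) is common to DARTS-style differentiable NAS. The sketch below illustrates only that generic recipe; the operation list, function names, and random architecture weights are illustrative assumptions, not the paper's actual search space or implementation.

```python
import math
import random

# Candidate operations per edge of the searched cell (illustrative only;
# the paper's revised search space favors low-complexity operations).
CANDIDATE_OPS = ["identity", "conv3", "conv5", "max_pool", "zero"]

def softmax(xs):
    """Numerically stable softmax over a list of architecture weights."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def derive_architecture(alpha):
    """Discretize: keep the highest-weight candidate op on every edge."""
    arch = []
    for edge_weights in alpha:
        probs = softmax(edge_weights)
        best = max(range(len(CANDIDATE_OPS)), key=lambda i: probs[i])
        arch.append(CANDIDATE_OPS[best])
    return arch

def build_evaluation_network(cell, num_cells):
    """Transfer step: stack the searched cell repeatedly to build the
    (deeper) evaluation network trained on the large dataset."""
    return [list(cell) for _ in range(num_cells)]

# Architecture weights as they might look after search on the proxy
# dataset (random here purely for illustration).
random.seed(0)
alpha = [[random.gauss(0, 1) for _ in CANDIDATE_OPS] for _ in range(4)]

cell = derive_architecture(alpha)
network = build_evaluation_network(cell, num_cells=8)
print(cell)          # one op per edge of the discrete cell
print(len(network))  # depth of the transferred evaluation network
```

In a real system the weights in `alpha` are optimized jointly with network parameters on the proxy task; only the discretized cell, not the weights themselves, is carried over to the large-scale training run.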
