
Large-scale Transfer Learning for Low-resource Spoken Language Understanding


Abstract: End-to-end Spoken Language Understanding (SLU) models are made increasingly large and complex to achieve state-of-the-art accuracy. However, the increased complexity of a model also introduces a high risk of over-fitting, which is a major challenge in SLU tasks due to the limited amount of available data. In this paper, we propose an attention-based SLU model together with three encoder enhancement strategies to overcome the data sparsity challenge. The first strategy focuses on a transfer learning approach to improve the feature extraction capability of the encoder. It is implemented by pre-training the encoder component with a large quantity of Automatic Speech Recognition (ASR) annotated data, relying on the standard Transformer architecture, and then fine-tuning the SLU model with a small amount of target labelled data. The second strategy adopts multi-task learning: the SLU model integrates the speech recognition model by sharing the same underlying encoder, thereby improving robustness and generalization ability. The third strategy, borrowing the Component Fusion (CF) idea, involves a Bidirectional Encoder Representations from Transformers (BERT) model and aims to boost the capability of the decoder with an auxiliary network. It hence reduces the risk of over-fitting and indirectly augments the ability of the underlying encoder. Experiments on the FluentAI dataset show that the cross-language transfer learning and multi-task strategies improve accuracy by up to 4.52% and 3.89% respectively, compared to the baseline.
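The first two strategies described above follow a common pattern: pre-train a Transformer encoder on ASR-annotated data, then fine-tune on a small labelled SLU set while the encoder remains shared between the ASR and SLU objectives. The sketch below is a minimal, hypothetical PyTorch illustration of that flow, not the authors' implementation: the feature dimension, model sizes, CTC objective for the ASR branch, mean-pooled intent head, and the loss weight alpha are all illustrative assumptions.

```python
# Hypothetical sketch of strategy 1 (ASR pre-training of the encoder) and
# strategy 2 (multi-task fine-tuning with a shared encoder). All dimensions,
# the CTC objective, and the loss weighting are assumptions for illustration.
import torch
import torch.nn as nn

class SharedSpeechEncoder(nn.Module):
    """Transformer encoder over acoustic feature frames (e.g. filterbanks)."""
    def __init__(self, feat_dim=80, d_model=256, n_layers=4, n_heads=4):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, feats):                  # feats: (batch, time, feat_dim)
        return self.encoder(self.proj(feats))  # (batch, time, d_model)

# Stage 1: pre-train the encoder on ASR data (CTC objective assumed here).
encoder = SharedSpeechEncoder()
vocab_size = 1000                              # assumed ASR output vocabulary
asr_head = nn.Linear(256, vocab_size)
ctc_loss = nn.CTCLoss(blank=0)

# Stage 2: fine-tune on a small SLU set, keeping the ASR head as an auxiliary
# task so the shared encoder stays regularized (multi-task learning).
num_intents = 31                               # e.g. intent classes in FluentAI
slu_head = nn.Linear(256, num_intents)
intent_loss = nn.CrossEntropyLoss()
optim = torch.optim.Adam(
    list(encoder.parameters()) + list(asr_head.parameters())
    + list(slu_head.parameters()), lr=1e-4)

def train_step(feats, targets, target_lens, intents, alpha=0.3):
    """One multi-task update: intent loss plus a weighted ASR (CTC) loss."""
    enc = encoder(feats)                              # (B, T, 256)
    log_probs = asr_head(enc).log_softmax(-1)         # (B, T, vocab)
    input_lens = torch.full((feats.size(0),), enc.size(1), dtype=torch.long)
    l_asr = ctc_loss(log_probs.transpose(0, 1),       # CTC expects (T, B, C)
                     targets, input_lens, target_lens)
    l_slu = intent_loss(slu_head(enc.mean(dim=1)),    # pooled utterance vector
                        intents)
    loss = l_slu + alpha * l_asr
    optim.zero_grad(); loss.backward(); optim.step()
    return loss.item()
```

In this reading, stage 1 would run train_step-style updates with only the ASR branch active, and stage 2 would reuse the same encoder weights while the intent head is trained on the small target set; the third strategy (BERT-based component fusion on the decoder side) is not shown.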
