
End-to-End Neural Transformer Based Spoken Language Understanding

Date: 2021-05-07


Document pages: 5

Abstract: Spoken language understanding (SLU) refers to the process of inferring the semantic information carried by audio signals. While neural transformers consistently deliver the best performance among state-of-the-art neural architectures in the field of natural language processing (NLP), their merits in the closely related field of spoken language understanding (SLU) have not been investigated. In this paper, we introduce an end-to-end neural transformer-based SLU model that can predict the variable-length domain, intent, and slot vectors embedded in an audio signal with no intermediate token prediction architecture. This new architecture leverages the self-attention mechanism, by which the audio signal is projected into various sub-spaces, allowing the semantic context implied by an utterance to be extracted. Our end-to-end transformer SLU predicts the domains, intents, and slots in the Fluent Speech Commands dataset with accuracies of 98.1%, 99.6%, and 99.6%, respectively, and outperforms SLU models that leverage a combination of recurrent and convolutional neural networks by 1.4%, while the size of our model is 25% smaller than that of these architectures. Additionally, due to the independent sub-space projections in the self-attention layer, the model is highly parallelizable, which makes it a good candidate for on-device SLU.
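The independent sub-space projections the abstract credits for parallelizability are the per-head query/key/value projections of multi-head self-attention. A minimal NumPy sketch of that mechanism follows; the random weights stand in for learned parameters, and the input shape (a sequence of acoustic frame embeddings) and all names are illustrative, not taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, num_heads, rng):
    """x: (seq_len, d_model) acoustic frame embeddings (hypothetical input).

    Each head attends over its own d_model/num_heads sub-space; the heads
    are independent, so they can be computed in parallel.
    """
    seq_len, d_model = x.shape
    assert d_model % num_heads == 0
    d_head = d_model // num_heads

    # Random projections stand in for learned weight matrices.
    Wq, Wk, Wv, Wo = (rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
                      for _ in range(4))

    # Project, then split into heads: (num_heads, seq_len, d_head).
    q = (x @ Wq).reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)
    k = (x @ Wk).reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)
    v = (x @ Wv).reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)

    # Scaled dot-product attention per head: (num_heads, seq_len, seq_len).
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)
    attn = softmax(scores, axis=-1)

    # Concatenate heads and apply the output projection.
    out = (attn @ v).transpose(1, 0, 2).reshape(seq_len, d_model)
    return out @ Wo

rng = np.random.default_rng(0)
frames = rng.standard_normal((10, 16))   # 10 frames, 16-dim embeddings
context = multi_head_self_attention(frames, num_heads=4, rng=rng)
print(context.shape)                     # (10, 16)
```

Because each head's projections touch a disjoint slice of the model dimension, the per-head matrix products above have no data dependencies on one another, which is the property the abstract points to for on-device parallel execution.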
