
Speech To Semantics: Improve ASR and NLU Jointly via All-Neural Interfaces



Abstract: We consider the problem of spoken language understanding (SLU): extracting natural language intents and associated slot arguments or named entities from speech that is primarily directed at voice assistants. Such a system subsumes both automatic speech recognition (ASR) and natural language understanding (NLU). An end-to-end joint SLU model can be built to a required specification, opening up the opportunity to deploy in hardware-constrained scenarios such as on-device, enabling voice assistants to work offline in a privacy-preserving manner while also reducing server costs. We first present models that extract utterance intent directly from speech without intermediate text output. We then present a compositional model, which generates the transcript using the Listen Attend Spell ASR system and then extracts the interpretation using a neural NLU model. Finally, we contrast these methods with a jointly trained end-to-end SLU model, consisting of ASR and NLU subsystems connected by a neural network based interface instead of text, that produces transcripts as well as NLU interpretations. We show that the jointly trained model improves ASR by incorporating semantic information from NLU and also improves NLU by exposing it to ASR confusion encoded in the hidden layer.
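The sketch below illustrates the kind of architecture the abstract describes, under my own assumptions rather than the paper's exact implementation: a Listen-Attend-Spell-style ASR component whose decoder hidden states, rather than its text output, serve as the neural interface feeding an NLU head for intent classification and slot tagging, so the two subsystems can be trained jointly. All module names and hyperparameters are illustrative.

```python
# Minimal joint SLU sketch (assumed architecture, not the paper's exact model).
import torch
import torch.nn as nn


class JointSLU(nn.Module):
    def __init__(self, n_mels=80, vocab_size=1000, n_intents=20, n_slots=40, hidden=256):
        super().__init__()
        # "Listen": BiLSTM encoder over log-mel features (single layer for brevity).
        self.listener = nn.LSTM(n_mels, hidden, batch_first=True, bidirectional=True)
        self.enc_proj = nn.Linear(2 * hidden, hidden)
        # "Attend and Spell": attention over encoder states plus an LSTM decoder over tokens.
        self.embed = nn.Embedding(vocab_size, hidden)
        self.attn = nn.MultiheadAttention(embed_dim=hidden, num_heads=4, batch_first=True)
        self.speller = nn.LSTM(2 * hidden, hidden, batch_first=True)
        self.char_out = nn.Linear(hidden, vocab_size)    # ASR transcript logits
        # NLU head consumes the decoder hidden states (neural interface), not text.
        self.intent_out = nn.Linear(hidden, n_intents)   # utterance-level intent logits
        self.slot_out = nn.Linear(hidden, n_slots)       # per-token slot logits

    def forward(self, audio_feats, prev_tokens):
        # audio_feats: (B, T, n_mels); prev_tokens: (B, U) teacher-forced token ids.
        enc, _ = self.listener(audio_feats)               # (B, T, 2*hidden)
        enc = self.enc_proj(enc)                          # (B, T, hidden)
        emb = self.embed(prev_tokens)                     # (B, U, hidden)
        ctx, _ = self.attn(emb, enc, enc)                 # attention context per output step
        dec, _ = self.speller(torch.cat([emb, ctx], -1))  # (B, U, hidden) decoder states
        asr_logits = self.char_out(dec)                   # transcript prediction
        slot_logits = self.slot_out(dec)                  # slots aligned with output tokens
        intent_logits = self.intent_out(dec.mean(dim=1))  # pooled utterance representation
        return asr_logits, intent_logits, slot_logits


# Joint training would combine an ASR loss on asr_logits with NLU losses on the intent
# and slot logits, so semantic supervision can shape the shared hidden representation.
model = JointSLU()
feats = torch.randn(2, 120, 80)
tokens = torch.randint(0, 1000, (2, 12))
asr_logits, intent_logits, slot_logits = model(feats, tokens)
```

Because the NLU head reads hidden states instead of a decoded transcript, it sees the ASR system's uncertainty directly, which is the mechanism the abstract credits for the mutual improvement between ASR and NLU.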
