
Multilingual Jointly Trained Acoustic and Written Word Embeddings


Document pages: 5 pages

Abstract: Acoustic word embeddings (AWEs) are vector representations of spoken word segments. AWEs can be learned jointly with embeddings of character sequences, to generate phonetically meaningful embeddings of written words, or acoustically grounded word embeddings (AGWEs). Such embeddings have been used to improve speech retrieval, recognition, and spoken term discovery. In this work, we extend this idea to multiple low-resource languages. We jointly train an AWE model and an AGWE model, using phonetically transcribed data from multiple languages. The pre-trained models can then be used for unseen zero-resource languages, or fine-tuned on data from low-resource languages. We also investigate distinctive features, as an alternative to phone labels, to better share cross-lingual information. We test our models on word discrimination tasks for twelve languages. When trained on eleven languages and tested on the remaining unseen language, our model outperforms traditional unsupervised approaches like dynamic time warping. After fine-tuning the pre-trained models on one hour or even ten minutes of data from a new language, performance is typically much better than training on only the target-language data. We also find that phonetic supervision improves performance over character sequences, and that distinctive feature supervision is helpful in handling unseen phones in the target language.
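Joint AWE/AGWE training of the kind the abstract describes is typically driven by a contrastive objective that pulls a spoken word's embedding toward the embedding of its written form and pushes it away from embeddings of other words. The paper's exact loss and architecture are not given here; the sketch below is only a minimal illustration of one such multiview triplet loss, with the function names and the margin value chosen for this example.

```python
import math

def cosine_dist(a, b):
    """Cosine distance between two embedding vectors (plain Python lists)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

def multiview_triplet_loss(acoustic, written, margin=0.5):
    """Illustrative multiview triplet loss over paired embeddings.

    acoustic[i] is an AWE for a spoken word segment; written[i] is the
    AGWE for the same word's written form.  Each matched pair acts as
    the positive; every other written embedding acts as a negative.
    """
    n = len(acoustic)
    total = 0.0
    for i in range(n):
        pos = cosine_dist(acoustic[i], written[i])  # matched pair: small is good
        for j in range(n):
            if j == i:
                continue
            neg = cosine_dist(acoustic[i], written[j])  # mismatched pair
            total += max(0.0, margin + pos - neg)       # hinge: pos + margin < neg
    return total / (n * (n - 1))
```

With perfectly aligned embeddings the hinge terms all vanish and the loss is zero; shuffling the written embeddings makes it positive, which is the gradient signal that drives the two views together during training.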
