
Multi-speaker Text-to-speech Synthesis Using Deep Gaussian Processes


Document pages: 5 pages

Abstract: Multi-speaker speech synthesis is a technique for modeling multiple speakers' voices with a single model. Although many approaches using deep neural networks (DNNs) have been proposed, DNNs are prone to overfitting when the amount of training data is limited. We propose a framework for multi-speaker speech synthesis using deep Gaussian processes (DGPs); a DGP is a deep architecture of Bayesian kernel regressions and is thus robust to overfitting. In this framework, speaker information is fed to duration and acoustic models using speaker codes. We also examine the use of deep Gaussian process latent variable models (DGPLVMs). In this approach, the representation of each speaker is learned simultaneously with the other model parameters, so the similarity or dissimilarity of speakers is considered efficiently. We experimentally evaluated two situations to investigate the effectiveness of the proposed methods: in one, the amount of data from each speaker is balanced (speaker-balanced); in the other, the data from certain speakers are limited (speaker-imbalanced). Subjective and objective evaluation results showed that both the DGP and DGPLVM synthesize multi-speaker speech more effectively than a DNN in the speaker-balanced situation. We also found that the DGPLVM significantly outperforms the DGP in the speaker-imbalanced situation.
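The two ingredients the abstract names can be illustrated compactly: a single layer of Bayesian kernel regression (a GP), and speaker conditioning via speaker codes appended to the input features. The sketch below is a minimal toy illustration, not the paper's model: all data, kernel parameters, and the one-layer setup are hypothetical assumptions; a DGP would stack such layers, feeding one layer's output into the next.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel between the rows of X1 and X2.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_predict(X_train, y_train, X_test, noise=1e-2):
    # GP regression posterior mean -- one "Bayesian kernel regression" layer.
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_test, X_train)
    return K_s @ np.linalg.solve(K, y_train)

# Hypothetical toy data: a 1-D "linguistic" feature plus a one-hot speaker code.
rng = np.random.default_rng(0)
ling = rng.uniform(-1, 1, size=(20, 1))
speaker = np.repeat(np.eye(2), 10, axis=0)        # two speakers, 10 frames each
X = np.hstack([ling, speaker])                    # input = features + speaker code
y = np.sin(3 * ling[:, 0]) + 0.5 * speaker[:, 1]  # speaker 1's targets shifted by +0.5

# Same linguistic input, different speaker codes -> different predictions.
X_test = np.array([[0.0, 0.0, 1.0],   # ling = 0, speaker 1
                   [0.0, 1.0, 0.0]])  # ling = 0, speaker 0
pred = gp_predict(X, y, X_test)
print(pred)  # prediction for speaker 1 sits above speaker 0's
```

The point of the speaker-code design is visible in the last lines: conditioning is nothing more than extra input dimensions, so one shared model interpolates across speakers instead of training one model per voice. The DGPLVM variant replaces the fixed one-hot codes with latent vectors learned jointly with the model, which is what lets similar speakers share statistical strength when their data are imbalanced.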
