
Sound Field Translation and Mixed Source Model for Virtual Applications with Perceptual Validation


Document pages: 12

Abstract: Non-interactive and linear experiences such as cinema film offer high-quality surround sound audio to enhance immersion; however, the listener's experience is usually fixed to a single acoustic perspective. With the rise of virtual reality, there is a demand for recording and recreating real-world experiences in a way that allows the user to interact and move within the reproduction. Conventional sound field translation techniques take a recording and expand it into an equivalent environment of virtual sources. However, the finite sampling of a commercial higher-order microphone produces an acoustic sweet-spot in the virtual reproduction. As a result, the technique still restricts the listener's navigable region. In this paper, we propose a method for listener translation in an acoustic reproduction that incorporates a mixture of near-field and far-field sources in a sparsely expanded virtual environment. We perceptually validate the method through a Multiple Stimulus with Hidden Reference and Anchor (MUSHRA) experiment. Compared to the plane-wave benchmark, the proposed method offers both improved source localizability and robustness to spectral distortions at translated positions. A cross-examination with numerical simulations demonstrates that the sparse expansion relaxes the inherent sweet-spot constraint, leading to improved localizability in sparse environments. Additionally, the proposed method better reproduces the intensity and binaural room impulse response spectra of near-field environments, further supporting the strong perceptual results.
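The abstract does not detail the proposed expansion, but the core idea of fitting a sparse mixture of near-field point sources and far-field plane waves to a higher-order microphone recording can be sketched as follows. This is a minimal, illustrative sketch and not the authors' implementation: the spherical-harmonic conventions, the source grids, the greedy matching-pursuit solver, and all parameters (e.g. `N_ORDER`, the 1 kHz analysis frequency, the candidate radii) are assumptions chosen only for this example.

```python
"""Illustrative sketch: expand a higher-order microphone recording (given as
spherical-harmonic coefficients) into a sparse mixture of near-field point
sources and far-field plane waves, then re-synthesise the pressure at a
translated listener position. All grids and parameters are assumptions."""
import numpy as np
from scipy.special import spherical_jn, spherical_yn, sph_harm

C = 343.0             # speed of sound [m/s]
F = 1000.0            # analysis frequency [Hz]
K = 2 * np.pi * F / C
N_ORDER = 4           # spherical-harmonic truncation order (assumed)


def spherical_hn1(n, x):
    """Spherical Hankel function of the first kind, h_n^(1)(x)."""
    return spherical_jn(n, x) + 1j * spherical_yn(n, x)


def plane_wave_coeffs(az, col, order):
    """Interior coefficients of a unit plane wave propagating along (az, col):
    alpha_nm = 4*pi*i^n * conj(Y_nm)."""
    a = []
    for n in range(order + 1):
        for m in range(-n, n + 1):
            a.append(4 * np.pi * 1j ** n * np.conj(sph_harm(m, n, az, col)))
    return np.array(a)


def point_source_coeffs(r, az, col, order, k=K):
    """Interior coefficients of a point source at distance r in direction
    (az, col): alpha_nm = i*k*h_n^(1)(k*r) * conj(Y_nm)."""
    a = []
    for n in range(order + 1):
        hn = spherical_hn1(n, k * r)
        for m in range(-n, n + 1):
            a.append(1j * k * hn * np.conj(sph_harm(m, n, az, col)))
    return np.array(a)


def build_dictionary(order):
    """Candidate virtual sources: far-field plane waves plus near-field
    point sources on coarse (assumed) direction/distance grids."""
    atoms, meta = [], []
    for az in np.linspace(0, 2 * np.pi, 16, endpoint=False):
        for col in (np.pi / 3, np.pi / 2, 2 * np.pi / 3):
            atoms.append(plane_wave_coeffs(az, col, order))
            meta.append(("plane", az, col, None))
            for r in (1.0, 2.0):                    # near-field radii [m]
                atoms.append(point_source_coeffs(r, az, col, order))
                meta.append(("point", az, col, r))
    return np.array(atoms).T, meta                  # (coefficients x atoms)


def omp(A, b, n_atoms):
    """Minimal complex orthogonal matching pursuit: greedily pick the atom
    most correlated with the residual, refit by least squares, repeat."""
    support, residual = [], b.copy()
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_atoms):
        idx = int(np.argmax(np.abs(A.conj().T @ residual)))
        if idx not in support:
            support.append(idx)
        sol, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ sol
    x[support] = sol
    return x


def pressure_at(x, weights, meta, k=K):
    """Pressure at a translated listener position x, synthesised from the
    identified virtual sources (free-field propagation assumed)."""
    p = 0j
    for idx in np.flatnonzero(np.abs(weights) > 1e-9):
        kind, az, col, r = meta[idx]
        u = np.array([np.sin(col) * np.cos(az),
                      np.sin(col) * np.sin(az),
                      np.cos(col)])
        if kind == "plane":
            p += weights[idx] * np.exp(1j * k * u @ x)
        else:
            d = np.linalg.norm(r * u - x)
            p += weights[idx] * np.exp(1j * k * d) / (4 * np.pi * d)
    return p


if __name__ == "__main__":
    A, meta = build_dictionary(N_ORDER)
    # Simulated "recording": a single near-field source 1 m away at 45 deg.
    alpha_rec = point_source_coeffs(1.0, np.deg2rad(45), np.pi / 2, N_ORDER)
    w = omp(A, alpha_rec, n_atoms=4)
    for idx in np.flatnonzero(np.abs(w) > 1e-6):
        kind, az, col, r = meta[idx]
        print(f"{kind}  az={np.degrees(az):.1f} deg  "
              f"col={np.degrees(col):.1f} deg  r={r}")
    print("pressure at translated point:",
          pressure_at(np.array([0.3, 0.2, 0.0]), w, meta))
```

Once the sparse set of virtual sources and their weights is identified, translating the listener reduces to re-evaluating each source's free-field contribution at the new position, which is what `pressure_at` illustrates here; the paper's actual reproduction pipeline may of course differ.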
