
A Loss Function for Generative Neural Networks Based on Watson's Perceptual Model


Document pages: 17

Abstract: Training Variational Autoencoders (VAEs) to generate realistic imagery requires a loss function that reflects human perception of image similarity. We propose such a loss function based on Watson's perceptual model, which computes a weighted distance in frequency space and accounts for luminance and contrast masking. We extend the model to color images, increase its robustness to translation by using the Fourier transform, remove artifacts due to splitting the image into blocks, and make it differentiable. In experiments, VAEs trained with the new loss function generated realistic, high-quality image samples. Compared to the Euclidean distance and the Structural Similarity Index, the resulting images were less blurry; compared to deep-neural-network-based losses, the new approach required fewer computational resources and generated images with fewer artifacts.
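The abstract describes a loss that measures a weighted distance in frequency space and gains translation robustness from the Fourier transform. The following is a minimal illustrative sketch of that idea only, not the paper's actual formulation: the function name, the uniform default weights, the p-norm pooling exponent, and the use of grayscale inputs are all assumptions introduced here. The key property shown is that discarding FFT phase makes the distance invariant to circular translation.

```python
import numpy as np

def watson_like_loss(x, y, weights=None, p=4.0, eps=1e-8):
    """Illustrative frequency-weighted distance between two grayscale images.

    NOTE: a sketch under assumptions, not the paper's method. It compares
    FFT magnitudes (phase is discarded, which yields robustness to circular
    translation), applies optional per-frequency weights, and pools the
    weighted differences with a p-norm, loosely mirroring the weighted
    pooling in Watson-style perceptual models.
    """
    # Magnitude spectra: invariant to circular shifts of the input.
    X = np.abs(np.fft.fft2(x))
    Y = np.abs(np.fft.fft2(y))
    if weights is None:
        # Uniform weights; a perceptual model would weight low
        # frequencies more heavily (assumption, for illustration).
        weights = np.ones_like(X)
    d = weights * np.abs(X - Y)
    # p-norm pooling; eps keeps the root well-defined at zero distance.
    return (np.mean(d ** p) + eps) ** (1.0 / p)

# Usage: a circularly shifted copy of an image scores (near) zero,
# while an unrelated image scores strictly higher.
rng = np.random.default_rng(0)
img = rng.random((8, 8))
shifted = np.roll(img, 3, axis=1)
other = rng.random((8, 8))
print(watson_like_loss(img, shifted), watson_like_loss(img, other))
```

Because it is built from `np.fft.fft2` and elementwise NumPy operations, a direct port to an autodiff framework would make the same computation differentiable, matching the abstract's requirement for gradient-based VAE training.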
