
Dreaming: Model-based Reinforcement Learning by Latent Imagination without Reconstruction

Abstract: In this paper, we propose a decoder-free extension of Dreamer, a leading model-based reinforcement learning (MBRL) method from pixels. Dreamer is a sample- and cost-efficient solution to robot learning, as it trains a latent state-space model based on a variational autoencoder and performs policy optimization by latent trajectory imagination. However, this autoencoding-based approach often causes object vanishing, in which the autoencoder fails to perceive key objects for solving control tasks, significantly limiting Dreamer's potential. This work aims to relieve this bottleneck and enhance Dreamer's performance by removing the decoder. For this purpose, we first derive a likelihood-free, InfoMax contrastive-learning objective from Dreamer's evidence lower bound. Second, we incorporate two components, (i) independent linear dynamics and (ii) random-crop data augmentation, into the learning scheme to improve training performance. In comparison to Dreamer and other recent model-free reinforcement learning methods, our newly devised Dreamer with InfoMax and without generative decoder (Dreaming) achieves the best scores on five difficult simulated robotics tasks in which Dreamer suffers from object vanishing.
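
The abstract names two concrete ingredients: a contrastive (InfoMax) objective that replaces the reconstruction/decoder term, and random-crop data augmentation. The following is a minimal, hypothetical Python/PyTorch sketch of those two ideas only, not the paper's actual implementation; the function names (random_crop, infonce_loss), the 64x64 crop size, the 128-dimensional embeddings, and the temperature value are illustrative assumptions.

import torch
import torch.nn.functional as F

def random_crop(images: torch.Tensor, out_size: int = 64) -> torch.Tensor:
    """Randomly crop a batch of images (B, C, H, W) to (B, C, out_size, out_size)."""
    b, _, h, w = images.shape
    tops = torch.randint(0, h - out_size + 1, (b,))
    lefts = torch.randint(0, w - out_size + 1, (b,))
    return torch.stack([
        images[i, :, int(tops[i]):int(tops[i]) + out_size,
                     int(lefts[i]):int(lefts[i]) + out_size]
        for i in range(b)
    ])

def infonce_loss(pred_latent: torch.Tensor, obs_embed: torch.Tensor,
                 temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style contrastive objective: each predicted latent should score
    highest against the embedding of its own observation, with the other
    samples in the batch serving as negatives (no pixel reconstruction)."""
    pred = F.normalize(pred_latent, dim=-1)
    obs = F.normalize(obs_embed, dim=-1)
    logits = pred @ obs.t() / temperature          # (B, B) similarity matrix
    labels = torch.arange(pred.size(0), device=pred.device)
    return F.cross_entropy(logits, labels)

# Toy usage with random tensors standing in for the encoder and dynamics outputs.
if __name__ == "__main__":
    images = torch.rand(8, 3, 72, 72)              # padded observations
    cropped = random_crop(images, out_size=64)     # random-crop augmentation
    obs_embed = torch.randn(8, 128)                # placeholder CNN features of cropped obs
    pred_latent = torch.randn(8, 128)              # placeholder latent-dynamics predictions
    print(infonce_loss(pred_latent, obs_embed))

In this reading, the loss is minimized when the latent predicted by the dynamics model is most similar to the embedding of the matching augmented observation, which is one standard way to realize a likelihood-free InfoMax objective without a generative decoder.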
