Momentum-Based Policy Gradient Methods

king · 2021-05-06

Document pages: 24

Abstract: In this paper, we propose a class of efficient momentum-based policy gradient methods for model-free reinforcement learning, which use adaptive learning rates and do not require any large batches. Specifically, we propose a fast importance-sampling momentum-based policy gradient (IS-MBPG) method based on a new momentum-based variance-reduced technique and the importance sampling technique. We also propose a fast Hessian-aided momentum-based policy gradient (HA-MBPG) method based on the momentum-based variance-reduced technique and the Hessian-aided technique. Moreover, we prove that both the IS-MBPG and HA-MBPG methods reach the best known sample complexity of $O(\epsilon^{-3})$ for finding an $\epsilon$-stationary point of the non-concave performance function, while requiring only one trajectory at each iteration. In particular, we present a non-adaptive version of the IS-MBPG method, i.e., IS-MBPG*, which also reaches the best known sample complexity of $O(\epsilon^{-3})$ without any large batches. In the experiments, we use four benchmark tasks to demonstrate the effectiveness of our algorithms.
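The key ingredient described in the abstract — a momentum-based variance-reduced gradient estimator combined with an importance-sampling correction, using a single trajectory per iteration — can be sketched on a toy problem. The sketch below is only an illustration of the general idea, not the paper's IS-MBPG algorithm: the two-armed bandit environment, the fixed step size `eta`, and the fixed momentum weight `beta` are all assumptions (the paper uses adaptive learning rates and full RL trajectories). The importance weight reweights the current sample so the gradient at the previous iterate can be estimated without drawing a second sample.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical two-armed bandit: arm 0 pays 1.0, arm 1 pays 0.2.
rewards = np.array([1.0, 0.2])

theta = np.zeros(2)         # current policy parameters (softmax logits)
theta_prev = theta.copy()   # previous iterate
u = np.zeros(2)             # momentum-based variance-reduced gradient estimate
eta, beta = 0.1, 0.5        # step size and momentum weight (fixed here for simplicity)

for t in range(2000):
    pi = softmax(theta)
    a = rng.choice(2, p=pi)           # one sample ("trajectory") per iteration
    r = rewards[a]
    score = -pi.copy(); score[a] += 1.0   # grad_theta log pi(a)
    g = r * score                          # REINFORCE gradient at theta

    # Gradient at the previous iterate, reusing the same sample via an
    # importance weight pi_prev(a) / pi(a).
    pi_prev = softmax(theta_prev)
    w = pi_prev[a] / pi[a]
    score_prev = -pi_prev.copy(); score_prev[a] += 1.0
    g_prev = w * r * score_prev

    # Recursive momentum estimator: u_t = g_t + (1 - beta) * (u_{t-1} - g_prev_t).
    u = g + (1.0 - beta) * (u - g_prev)

    theta_prev = theta.copy()
    theta = theta + eta * u           # gradient ascent on expected reward

print(softmax(theta))                 # should concentrate on the better arm 0
```

The recursive term `(1 - beta) * (u - g_prev)` carries over the accumulated estimate while correcting it with the difference of gradients at consecutive iterates, which is what reduces the variance of the single-sample estimator.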
