
Langevin Dynamics for Adaptive Inverse Reinforcement Learning of Stochastic Gradient Algorithms

Document length: 33 pages

Abstract: Inverse reinforcement learning (IRL) aims to estimate the reward function of optimizing agents by observing their responses (estimates or actions). This paper considers IRL when noisy estimates of the gradient of a reward function, generated by multiple stochastic gradient agents, are observed. We present a generalized Langevin dynamics algorithm to estimate the reward function $R(\theta)$; specifically, the resulting Langevin algorithm asymptotically generates samples from the distribution proportional to $\exp(R(\theta))$. The proposed IRL algorithms use kernel-based passive learning schemes. We also construct multi-kernel passive Langevin algorithms for IRL which are suitable for high-dimensional data. The performance of the proposed IRL algorithms is illustrated on examples in adaptive Bayesian learning, logistic regression (a high-dimensional problem) and constrained Markov decision processes. We prove weak convergence of the proposed IRL algorithms using martingale averaging methods. We also analyze the tracking performance of the IRL algorithms in non-stationary environments where the utility function $R(\theta)$ jump-changes over time as a slow Markov chain.
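To illustrate the core idea the abstract describes, the following is a minimal sketch (not the paper's algorithm) of an unadjusted Langevin iteration driven by noisy reward-gradient observations: iterates of the form $\theta_{k+1} = \theta_k + \epsilon \widehat{\nabla R}(\theta_k) + \sqrt{2\epsilon}\, w_k$ asymptotically sample from the density proportional to $\exp(R(\theta))$. The function names, step size and noise level below are illustrative assumptions.

```python
import numpy as np

def langevin_sample(grad_R, theta0, step=1e-2, n_iter=50_000,
                    grad_noise_std=0.1, seed=0):
    """Unadjusted Langevin dynamics with noisy gradient estimates.

    theta_{k+1} = theta_k + step * grad_hat + sqrt(2*step) * w_k,
    where grad_hat is a noisy observation of the reward gradient
    (mimicking the stochastic gradient agents being observed).
    The iterates asymptotically sample from a density proportional
    to exp(R(theta)), up to a discretization bias of order `step`.
    """
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    samples = np.empty((n_iter,) + theta.shape)
    for k in range(n_iter):
        # Noisy gradient observation of R at the current iterate.
        grad_hat = grad_R(theta) + grad_noise_std * rng.standard_normal(theta.shape)
        # Langevin update: gradient drift plus injected Gaussian noise.
        theta = theta + step * grad_hat \
                + np.sqrt(2.0 * step) * rng.standard_normal(theta.shape)
        samples[k] = theta
    return samples

# Toy check: with R(theta) = -theta^2/2, exp(R) is a standard Gaussian,
# so post-burn-in samples should have mean ~0 and variance ~1.
samples = langevin_sample(lambda th: -th, theta0=np.zeros(1))
burned = samples[10_000:]
```

The discretization bias shrinks with the step size, so in practice `step` trades off accuracy of the stationary distribution against mixing speed.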
