
Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and Variance Reduction

Abstract: Asynchronous Q-learning aims to learn the optimal action-value function (or Q-function) of a Markov decision process (MDP), based on a single trajectory of Markovian samples induced by a behavior policy. Focusing on a $\gamma$-discounted MDP with state space $\mathcal{S}$ and action space $\mathcal{A}$, we demonstrate that the $\ell_{\infty}$-based sample complexity of classical asynchronous Q-learning -- namely, the number of samples needed to yield an entrywise $\varepsilon$-accurate estimate of the Q-function -- is at most on the order of
\begin{equation*}
\frac{1}{\mu_{\mathsf{min}}(1-\gamma)^5\varepsilon^2} + \frac{t_{\mathsf{mix}}}{\mu_{\mathsf{min}}(1-\gamma)}
\end{equation*}
up to some logarithmic factor, provided that a proper constant learning rate is adopted. Here, $t_{\mathsf{mix}}$ and $\mu_{\mathsf{min}}$ denote respectively the mixing time and the minimum state-action occupancy probability of the sample trajectory. The first term of this bound matches the complexity in the case with independent samples drawn from the stationary distribution of the trajectory. The second term reflects the expense taken for the empirical distribution of the Markovian trajectory to reach a steady state, which is incurred at the very beginning and becomes amortized as the algorithm runs. Encouragingly, the above bound improves upon the state-of-the-art result by a factor of at least $|\mathcal{S}||\mathcal{A}|$. Further, the scaling on the discount complexity can be improved by means of variance reduction.
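
For readers unfamiliar with the update rule being analyzed, the following is a minimal sketch of classical asynchronous Q-learning run along a single Markovian trajectory with a constant learning rate, as described in the abstract. The tabular MDP, the uniformly random behavior policy, and all parameter values (gamma, eta, the trajectory length) are illustrative assumptions for this sketch, not the paper's setup.

```python
import numpy as np

def async_q_learning(P, R, gamma, eta, num_steps, seed=0):
    """Classical asynchronous Q-learning along one Markovian trajectory.

    P[s, a] is a probability vector over next states and R[s, a] a reward.
    A uniformly random behavior policy (an assumption of this sketch)
    generates the trajectory; each step updates only the visited
    (state, action) entry, using a constant learning rate eta.
    """
    rng = np.random.default_rng(seed)
    num_states, num_actions = R.shape
    Q = np.zeros((num_states, num_actions))
    s = rng.integers(num_states)
    for _ in range(num_steps):
        a = rng.integers(num_actions)               # behavior policy: uniform here
        s_next = rng.choice(num_states, p=P[s, a])  # one Markovian transition
        target = R[s, a] + gamma * Q[s_next].max()  # temporal-difference target
        Q[s, a] += eta * (target - Q[s, a])         # asynchronous entrywise update
        s = s_next
    return Q

# Illustrative usage on a small randomly generated MDP.
rng = np.random.default_rng(1)
S, A = 4, 2
P = rng.dirichlet(np.ones(S), size=(S, A))  # transition kernel P[s, a, s']
R = rng.random((S, A))                      # deterministic rewards in [0, 1]
Q_hat = async_q_learning(P, R, gamma=0.9, eta=0.1, num_steps=50_000)
print(np.round(Q_hat, 3))
```

The feature reflected in the sample-complexity bound above is that each step refreshes only the single (state, action) pair visited by the trajectory, so the rate at which every entry gets updated is governed by the minimum occupancy probability $\mu_{\mathsf{min}}$ and the mixing time $t_{\mathsf{mix}}$ of the trajectory.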
