Regret Bounds for Reinforcement Learning via Markov Chain Concentration

Ronald Ortner
Journal of Artificial Intelligence Research, 2020
We give a simple optimistic algorithm for which it is easy to derive regret bounds of Õ(√(t_mix · S A T)) after T steps in uniformly ergodic Markov decision processes with S states, A actions, and mixing time parameter t_mix. These bounds are the first regret bounds in the general, non-episodic setting with an optimal dependence on all given parameters; they could be improved only by using an alternative mixing time parameter.
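
For reference, the claimed bound can be written out as follows (a sketch: the regret definition is the standard one for average-reward MDPs, and the stated mixing-time definition is one common convention, not necessarily the exact parameter used in the paper):

% Regret after T steps, measured against the optimal average reward \rho^*:
\Delta(T) = T\rho^* - \sum_{t=1}^{T} r_t

% Claimed high-probability bound, with logarithmic factors hidden:
\Delta(T) = \tilde{O}\!\left(\sqrt{t_{\mathrm{mix}}\, S A T}\right)

% One standard (assumed) choice of mixing time parameter for a uniformly
% ergodic MDP: the smallest t such that, for every stationary policy \pi,
%   \max_s \bigl\| P_\pi^{t}(s,\cdot) - \mu_\pi \bigr\|_{\mathrm{TV}} \le 1/4,
% where \mu_\pi is the stationary distribution induced by \pi.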
doi:10.1613/jair.1.11316