Experimental analysis of eligibility traces strategies in temporal difference learning

Jinsong Leng, Lakhmi Jain, Colin Fyfe
2009 International Journal of Knowledge Engineering and Soft Data Paradigms  
Temporal difference (TD) learning is a model-free reinforcement learning technique that adopts an infinite-horizon discounted model and uses an incremental learning technique for dynamic programming. The state value function is updated from sample episodes. Utilising eligibility traces is a key mechanism for enhancing the rate of convergence. TD(λ) represents the use of eligibility traces through the parameter λ. However, the underlying mechanism of eligibility traces with an approximation function is not well understood, either from a theoretical or from a practical point of view. The TD(λ) method has been proved to converge with a tabular state representation. Unfortunately, proving the convergence of TD(λ) with function approximation remains an important open theoretical question. This paper aims to investigate the convergence and the effects of different eligibility traces. We adopt the Sarsa(λ) learning control algorithm in a large, stochastic and dynamic simulation environment called SoccerBots. The state value function is represented by a linear approximation function known as tile coding. The performance metrics generated from the simulation system are used to analyse the mechanism of eligibility traces.
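
As a rough illustration of the eligibility-trace variants the paper compares, the sketch below shows a Sarsa(λ) update loop with either accumulating or replacing traces. It is a minimal sketch rather than the paper's implementation: the paper uses tile coding (linear function approximation) over the SoccerBots state space, whereas this example assumes a small discrete state space and a generic environment with `reset()`/`step(action)` methods; all names and parameter values here are illustrative.

```python
# Minimal tabular Sarsa(lambda) sketch, NOT the paper's SoccerBots/tile-coding setup.
# Assumes an environment object with reset() -> state and step(a) -> (state, reward, done).
import numpy as np

def sarsa_lambda(env, n_states, n_actions, episodes=500,
                 alpha=0.1, gamma=0.99, lam=0.9, epsilon=0.1,
                 trace="replacing"):
    """Run Sarsa(lambda); `trace` selects 'accumulating' or 'replacing' eligibility traces."""
    Q = np.zeros((n_states, n_actions))

    def eps_greedy(s):
        # epsilon-greedy action selection over the current Q estimates
        if np.random.rand() < epsilon:
            return np.random.randint(n_actions)
        return int(np.argmax(Q[s]))

    for _ in range(episodes):
        e = np.zeros_like(Q)            # eligibility traces, reset at episode start
        s = env.reset()
        a = eps_greedy(s)
        done = False
        while not done:
            s2, r, done = env.step(a)
            a2 = eps_greedy(s2)
            # one-step Sarsa TD error for the (s, a, r, s', a') transition
            target = r + (0.0 if done else gamma * Q[s2, a2])
            delta = target - Q[s, a]
            # update the trace of the visited state-action pair
            if trace == "accumulating":
                e[s, a] += 1.0
            else:                        # replacing traces
                e[s, a] = 1.0
            # propagate the TD error to all recently visited pairs, then decay traces
            Q += alpha * delta * e
            e *= gamma * lam
            s, a = s2, a2
    return Q
```

The only difference between the two variants is whether a revisited state-action pair increments its trace or resets it to one; with function approximation the same distinction applies to the feature-wise trace vector, which is the setting the paper studies empirically.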
doi:10.1504/ijkesdp.2009.021982