Model learning actor-critic algorithms: Performance evaluation in a motion control task

Ivo Grondman, Lucian Busoniu, Robert Babuska
2012 IEEE 51st IEEE Conference on Decision and Control (CDC)
Reinforcement learning (RL) control provides a means to deal with the uncertainty and nonlinearity associated with control tasks in an optimal way. The class of actor-critic RL algorithms has proved useful for control systems with continuous state and input variables. In the literature, model-based actor-critic algorithms have recently been introduced to considerably speed up learning by constructing a model online through local linear regression (LLR). It has not yet been analyzed whether the speed-up is due to the model learning structure or to the LLR approximator. Therefore, in this paper we generalize the model learning actor-critic algorithms to make them suitable for use with an arbitrary function approximator. Furthermore, we present the results of an extensive analysis through numerical simulations of a typical nonlinear motion control problem. The LLR approximator is compared with radial basis functions (RBFs) in terms of the initial convergence rate and the final performance obtained. The results show that LLR-based actor-critic RL outperforms the RBF counterpart: it gives quick initial learning and comparable or even superior final control performance.
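To make the actor-critic-with-function-approximator setup concrete, the sketch below shows a standard model-free actor-critic with a linear-in-RBF critic and actor on a hypothetical 1-D integrator task. This is an illustrative toy, not the paper's benchmark or its model-learning variants: the dynamics, reward, feature centers, and learning rates are all assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Gaussian RBF features over a scalar state x in [-1, 1]
# (centers/width are illustrative choices, not taken from the paper).
centers = np.linspace(-1.0, 1.0, 9)
width = 0.25

def phi(x):
    return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

theta = np.zeros(centers.size)   # critic weights: V(x) ~ theta @ phi(x)
w = np.zeros(centers.size)       # actor weights: mean action ~ w @ phi(x)

gamma, alpha_c, alpha_a, sigma = 0.95, 0.1, 0.01, 0.3

for episode in range(200):
    x = rng.uniform(-1.0, 1.0)
    for step in range(30):
        f = phi(x)
        mean_a = w @ f
        a = mean_a + sigma * rng.standard_normal()   # Gaussian exploration
        x_next = np.clip(x + 0.1 * a, -1.0, 1.0)     # toy integrator dynamics
        r = -x_next ** 2                             # penalize distance from origin
        # TD error drives both the critic and the actor update
        delta = r + gamma * (theta @ phi(x_next)) - theta @ f
        theta += alpha_c * delta * f
        w += alpha_a * delta * (a - mean_a) * f      # likelihood-ratio actor step
        x = x_next
```

The model-based variants studied in the paper would additionally fit a process model from observed transitions (via LLR or another approximator) and use it to generate extra updates; that machinery is omitted here for brevity.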
doi:10.1109/cdc.2012.6426427 dblp:conf/cdc/GrondmanBB12 fatcat:iihtxvfeg5bdfg6c4bsvksf4jm