Challenges for the policy representation when applying reinforcement learning in robotics

Petar Kormushev, Sylvain Calinon, Darwin G. Caldwell, Barkan Ugurlu
The 2012 International Joint Conference on Neural Networks (IJCNN)
A summary of the state of the art in reinforcement learning for robotics is given, covering both algorithms and policy representations. Numerous challenges faced by the policy representation in robotics are identified. Two recent examples of applying reinforcement learning to robots are described: a pancake-flipping task and a bipedal-walking energy-minimization task. In both examples, a state-of-the-art Expectation-Maximization-based reinforcement learning algorithm is used, but different policy representations are proposed and evaluated for each task. The two proposed policy representations offer viable solutions to four rarely addressed challenges in policy representations: correlations, adaptability, multi-resolution, and globality. Both the successes and the practical difficulties encountered in these examples are discussed.
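To make the abstract's central technique concrete, the following is a minimal illustrative sketch of an EM-based policy search update in the spirit of reward-weighted averaging (e.g. PoWER-style methods). The toy quadratic reward, the parameter dimensions, and all function names here are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def em_policy_update(theta, sigma, reward_fn, n_rollouts=20, rng=None):
    """One EM-style policy search update: sample perturbed policy
    parameters, evaluate each rollout's reward, then take a
    reward-weighted average of the perturbations (M-step)."""
    rng = np.random.default_rng(rng)
    # E-step: exploration rollouts with Gaussian parameter perturbations.
    eps = rng.normal(0.0, sigma, size=(n_rollouts, theta.size))
    rewards = np.array([reward_fn(theta + e) for e in eps])
    # Shift rewards so the weights are non-negative.
    w = rewards - rewards.min()
    w_sum = w.sum()
    if w_sum <= 0:  # all rollouts performed equally; keep current policy
        return theta
    # M-step: reward-weighted average of the perturbations.
    return theta + (w[:, None] * eps).sum(axis=0) / w_sum

# Toy example: maximize a concave reward peaked at theta* = [1, -2].
target = np.array([1.0, -2.0])
reward = lambda th: -np.sum((th - target) ** 2)

rng = np.random.default_rng(0)
theta = np.zeros(2)
for _ in range(200):
    theta = em_policy_update(theta, sigma=0.3, reward_fn=reward, rng=rng)
```

Because the update is a weighted average rather than a gradient step, no learning rate is needed, which is one practical reason EM-based methods are attractive for robot learning.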
doi:10.1109/ijcnn.2012.6252758 dblp:conf/ijcnn/KormushevCCU12 fatcat:zlwj6m6cfzhy3bjx7n2udhcbzy