Faster reinforcement learning after pretraining deep networks to predict state dynamics

Charles W. Anderson, Minwoo Lee, Daniel L. Elliott
2015 International Joint Conference on Neural Networks (IJCNN)
Deep learning algorithms have recently appeared that pre-train hidden layers of neural networks in unsupervised ways, leading to state-of-the-art performance on large classification problems. These methods can also pre-train networks used for reinforcement learning. However, this ignores the additional information available in the reinforcement learning paradigm: the ongoing sequence of (state, action, new state) tuples. This paper demonstrates that learning a predictive model of state dynamics can result in a pre-trained hidden layer structure that reduces the time needed to solve reinforcement learning problems.
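The two-phase idea the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's actual architecture: the toy dynamics, network sizes, learning rates, and reward are all assumptions made here for demonstration. Phase 1 trains a shared hidden layer to predict the next state from (state, action) tuples; phase 2 reuses that pre-trained layer as the feature extractor for a linear Q-value head.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy environment (not from the paper): 2-D state, 1-D action.
def step(s, a):
    """Illustrative deterministic dynamics."""
    return np.tanh(s + 0.5 * a)

n_hidden = 16
W1 = rng.normal(0.0, 0.1, (3, n_hidden))      # shared hidden layer; input = [s, a]
W_pred = rng.normal(0.0, 0.1, (n_hidden, 2))  # next-state prediction head

def hidden(x):
    return np.tanh(x @ W1)

# Phase 1: pre-train the hidden layer on (state, action, new state) tuples
# by minimizing squared next-state prediction error with SGD.
lr = 0.05
losses = []
for _ in range(2000):
    s = rng.uniform(-1, 1, 2)
    a = rng.uniform(-1, 1, 1)
    x = np.concatenate([s, a])
    h = hidden(x)
    err = h @ W_pred - step(s, a)            # prediction error on next state
    losses.append(float(err @ err))
    dh = (err @ W_pred.T) * (1.0 - h**2)     # backprop through tanh
    W_pred -= lr * np.outer(h, err)
    W1 -= lr * np.outer(x, dh)

# Phase 2: keep the pre-trained W1 fixed and train a linear Q head on
# its features with TD(0) updates under a random exploration policy.
W_q = np.zeros(n_hidden)
actions = [np.array([-1.0]), np.array([1.0])]

def q_value(s, a):
    return float(hidden(np.concatenate([s, a])) @ W_q)

gamma, lr_q = 0.9, 0.01
s = rng.uniform(-1, 1, 2)
for _ in range(500):
    a = actions[rng.integers(2)]             # random exploration
    s2 = step(s, a)
    r = -float(s2 @ s2)                      # illustrative reward: stay near origin
    target = r + gamma * max(q_value(s2, b) for b in actions)
    phi = hidden(np.concatenate([s, a]))
    W_q += lr_q * (target - phi @ W_q) * phi # TD(0) update on fixed features
    s = s2
```

The point of the sketch is the division of labor: the prediction loss in phase 1 shapes `W1` using dynamics information that a purely unsupervised pre-training method would ignore, and the reinforcement learner in phase 2 only has to fit a head on top of those features.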
doi:10.1109/ijcnn.2015.7280824 dblp:conf/ijcnn/AndersonLE15