Abstraction Selection in Model-based Reinforcement Learning

Nan Jiang, Alex Kulesza, Satinder P. Singh
2015 International Conference on Machine Learning  
State abstractions are often used to reduce the complexity of model-based reinforcement learning when only limited quantities of data are available. However, choosing the appropriate level of abstraction is an important problem in practice. Existing approaches have theoretical guarantees only under strong assumptions on the domain or asymptotically large amounts of data, but in this paper we propose a simple algorithm based on statistical hypothesis testing that comes with a finite-sample guarantee under assumptions on candidate abstractions. Our algorithm trades off the low approximation error of finer abstractions against the low estimation error of coarser abstractions, resulting in a loss bound that depends only on the quality of the best available abstraction and is polynomial in planning horizon.
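
To illustrate the trade-off the abstract describes, here is a minimal sketch of abstraction selection that scores each candidate by an in-sample model-fit term (a proxy for approximation error) plus a data-dependent complexity penalty (a proxy for estimation error). This is not the paper's hypothesis-testing algorithm; the function names, the penalty form, and the selection criterion are all hypothetical stand-ins, assuming only a batch of transitions and a set of candidate state-aggregation functions.

```python
"""Toy abstraction selection: balance model fit against model complexity.
Hypothetical sketch only -- not the algorithm from Jiang, Kulesza & Singh."""

import math
from collections import defaultdict


def select_abstraction(transitions, abstractions, delta=0.05):
    """Pick a candidate abstraction from batch data.

    transitions: list of (s, a, r, s_next) tuples.
    abstractions: dict mapping a name to phi, where phi maps a raw
        state to an abstract state.
    Returns the name of the candidate with the best score.
    """
    if not transitions:
        raise ValueError("need at least one transition")
    n = len(transitions)
    scores = {}
    for name, phi in abstractions.items():
        # Fit an empirical abstract transition model: counts of
        # abstract next-states for each abstract (state, action) pair.
        counts = defaultdict(lambda: defaultdict(int))
        for s, a, _, s2 in transitions:
            counts[(phi(s), a)][phi(s2)] += 1
        # Approximation-error proxy: in-sample negative log-likelihood
        # of the fitted model (a coarser phi tends to fit worse).
        nll = 0.0
        for s, a, _, s2 in transitions:
            key = (phi(s), a)
            total = sum(counts[key].values())
            nll -= math.log(counts[key][phi(s2)] / total)
        # Estimation-error proxy: penalty growing with the number of
        # abstract state-action pairs and shrinking with sample size
        # (a finer phi pays a larger penalty).
        k = len(counts)
        penalty = math.sqrt(k * math.log(2 * k / delta) / n)
        scores[name] = nll / n + penalty
    return min(scores, key=scores.get)


# Example: choose between the identity abstraction and aggregation
# by parity on a small batch of integer-state transitions.
data = [(0, "a", 1.0, 1), (1, "a", 0.0, 0), (2, "a", 1.0, 3), (3, "a", 0.0, 2)]
candidates = {"identity": lambda s: s, "parity": lambda s: s % 2}
print(select_abstraction(data, candidates))
```

With so little data, the penalty term dominates and the coarser "parity" abstraction wins; with enough data the fit term takes over and the finer abstraction is selected, mirroring the approximation/estimation trade-off in the loss bound.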