Uncertainty-Based Out-of-Distribution Detection in Deep Reinforcement Learning [article]

Andreas Sedlmeier, Thomas Gabor, Thomy Phan, Lenz Belzner, Claudia Linnhoff-Popien
2019 arXiv pre-print
We consider the problem of detecting out-of-distribution (OOD) samples in deep reinforcement learning. In a value-based reinforcement learning setting, we propose to use uncertainty estimation techniques directly on the agent's value-estimating neural network to detect OOD samples. The focus of our work lies in analyzing the suitability of approximate Bayesian inference methods and related ensembling techniques that generate uncertainty estimates. Although prior work has shown that variational inference techniques and bootstrap-based approaches can be used to model epistemic uncertainty, their suitability for detecting OOD samples in deep reinforcement learning remains an open question. Our results show that uncertainty estimation can be used to differentiate in- from out-of-distribution samples. Over the complete training process of the reinforcement learning agents, bootstrap-based approaches tend to produce more reliable epistemic uncertainty estimates than dropout-based approaches.
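The abstract contrasts two estimators of epistemic uncertainty computed on the agent's value network: MC dropout (a variational-inference approximation) and bootstrap ensembles. The following is a minimal PyTorch sketch of both estimators, not the authors' code: the network architecture, sample counts, helper names, and the OOD threshold are illustrative assumptions. A state is flagged as OOD when the predictive variance of its Q-values exceeds a threshold calibrated on in-distribution data.

```python
# Minimal sketch (illustrative, not the authors' implementation) of the two
# epistemic-uncertainty estimators compared in the paper, applied to a
# Q-network. Network sizes, sample counts, and the threshold are assumptions.
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Small Q-network with dropout layers so MC-dropout sampling is possible."""
    def __init__(self, obs_dim, n_actions, p_drop=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)

def mc_dropout_uncertainty(q_net, state, n_samples=50):
    """Epistemic uncertainty via MC dropout: keep dropout active at inference
    time and take the variance of the sampled Q-value predictions."""
    q_net.train()  # keeps dropout layers stochastic during the forward passes
    with torch.no_grad():
        samples = torch.stack([q_net(state) for _ in range(n_samples)])
    # Variance over samples, averaged over actions -> one scalar per state.
    return samples.var(dim=0).mean(dim=-1)

def bootstrap_uncertainty(ensemble, state):
    """Epistemic uncertainty via a bootstrap ensemble: variance of the
    Q-value predictions across independently initialized/trained networks."""
    with torch.no_grad():
        samples = torch.stack([net(state) for net in ensemble])
    return samples.var(dim=0).mean(dim=-1)

if __name__ == "__main__":
    obs_dim, n_actions = 8, 4
    state = torch.randn(1, obs_dim)

    q_net = QNetwork(obs_dim, n_actions)
    ensemble = [QNetwork(obs_dim, n_actions, p_drop=0.0) for _ in range(10)]

    u_drop = mc_dropout_uncertainty(q_net, state)
    u_boot = bootstrap_uncertainty(ensemble, state)

    # OOD decision rule: flag states whose uncertainty exceeds a threshold
    # calibrated on in-distribution data (the value here is made up).
    threshold = 0.5
    print("MC-dropout uncertainty:", float(u_drop), "OOD:", bool(u_drop > threshold))
    print("Bootstrap uncertainty:", float(u_boot), "OOD:", bool(u_boot > threshold))
```

In both cases the variance across stochastic forward passes, over dropout masks or over ensemble members, serves as the epistemic uncertainty signal; the paper's reported finding is that the ensemble-based variance tends to remain the more reliable OOD indicator over the course of training.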
arXiv:1901.02219v1