Robust Asymmetric Learning in POMDPs [article]

Andrew Warrington and J. Wilder Lavington and Adam Ścibior and Mark Schmidt and Frank Wood
2021 arXiv pre-print
Policies for partially observed Markov decision processes can be efficiently learned by imitating policies for the corresponding fully observed Markov decision processes. Unfortunately, existing approaches for this kind of imitation learning have a serious flaw: the expert does not know what the trainee cannot see, and so may encourage actions that are sub-optimal, even unsafe, under partial information. We derive an objective to instead train the expert to maximize the expected reward of the
imitating agent policy, and use it to construct an efficient algorithm, adaptive asymmetric DAgger (A2D), that jointly trains the expert and the agent. We show that A2D produces an expert policy that the agent can safely imitate, in turn outperforming policies learned by imitating a fixed expert.
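The abstract describes a joint training structure: instead of distilling a fixed fully observed expert into a partially observing agent, the expert itself is updated with a reinforcement-learning objective evaluated under the imitating agent's behaviour, while the agent is updated by DAgger-style imitation on the states it actually visits. The toy sketch below illustrates only that structure; the environment, the SoftmaxPolicy class, the REINFORCE-style updates, and the mixing schedule are hypothetical simplifications and are not the A2D algorithm from the paper.

```python
# Toy sketch of jointly training a fully observed "expert" and a partially observing
# "agent" (illustrative only; not the authors' A2D implementation).
import numpy as np

rng = np.random.default_rng(0)


class SoftmaxPolicy:
    """Tabular softmax policy over a small discrete action space (hypothetical helper)."""

    def __init__(self, n_inputs, n_actions, lr=0.1):
        self.theta = np.zeros((n_inputs, n_actions))
        self.lr = lr

    def probs(self, x):
        z = np.exp(self.theta[x] - self.theta[x].max())
        return z / z.sum()

    def act(self, x):
        p = self.probs(x)
        return rng.choice(len(p), p=p)

    def reinforce(self, x, a, weight):
        # Policy-gradient-style step: increase log-prob of action a, scaled by weight.
        g = -self.probs(x)
        g[a] += 1.0
        self.theta[x] += self.lr * weight * g


def make_episode(expert, agent, beta, horizon=20):
    """Roll out a beta-mixture of expert and agent (DAgger-style), logging full states
    for the expert and partial observations for the agent."""
    states, obs, acts, rews = [], [], [], []
    state = int(rng.integers(0, 5))                  # full MDP state
    for _ in range(horizon):
        o = state // 2                               # partial observation: coarsened state
        policy, inp = (expert, state) if rng.random() < beta else (agent, o)
        a = policy.act(inp)
        states.append(state); obs.append(o); acts.append(a)
        rews.append(1.0 if a == state % 2 else 0.0)  # reward depends on the hidden detail
        state = int(rng.integers(0, 5))
    return states, obs, acts, rews


expert = SoftmaxPolicy(n_inputs=5, n_actions=2)      # sees the full state
agent = SoftmaxPolicy(n_inputs=3, n_actions=2)       # sees only the partial observation

for it in range(200):
    beta = max(0.0, 1.0 - it / 100)                  # decaying expert mixing coefficient
    states, obs, acts, rews = make_episode(expert, agent, beta)
    ret = sum(rews)
    for s, o, a in zip(states, obs, acts):
        # 1) RL-style update of the expert, using returns collected under the mixture /
        #    agent distribution, so the expert is pushed toward behaviour the agent can realise.
        expert.reinforce(s, a, ret)
        # 2) DAgger-style imitation update of the agent toward the expert's action
        #    at the observations the mixture policy actually visited.
        expert_action = int(np.argmax(expert.probs(s)))
        agent.reinforce(o, expert_action, 1.0)
```

Because the expert's return is estimated from trajectories generated by the agent (or the decaying mixture), recommendations that rely on information the agent cannot observe stop paying off, which is the failure mode of imitating a fixed expert that the abstract highlights.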
arXiv:2012.15566v3