Monotonic Robust Policy Optimization with Model Discrepancy

Yuankun Jiang, Chenglin Li, Wenrui Dai, Junni Zou, Hongkai Xiong
2021 International Conference on Machine Learning  
State-of-the-art deep reinforcement learning (DRL) algorithms tend to overfit due to the model discrepancy between source and target environments. Although applying domain randomization during training can improve the average performance by randomly generating a sufficient diversity of environments in the simulator, the worst-case environment is still neglected without any performance guarantee. Since both the average and worst-case performance are important for generalization in RL, in this paper we propose a policy optimization approach that concurrently improves the policy's performance in the average and worst-case environments. We theoretically derive a lower bound on the worst-case performance of a given policy by relating it to the expected performance. Guided by this lower bound, we formulate an optimization problem that jointly optimizes the policy and the sampling distribution, and prove that iteratively solving it monotonically improves the worst-case performance. We then develop a practical algorithm, named monotonic robust policy optimization (MRPO). Experimental evaluations on several robot control tasks demonstrate that MRPO generally improves both the average and worst-case performance in the source environments used for training, and in all cases endows the learned policy with better generalization capability in unseen testing environments.
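As a rough illustration of the loop the abstract describes (jointly updating the policy and the environment-sampling distribution so that the worst-case return is lifted along with the average), the toy Python sketch below re-weights a domain-randomization distribution toward the currently worst-performing environment parameters after each policy step. It is a hypothetical stand-in, not the paper's MRPO implementation: the scalar "environment", the analytic "policy gradient", and all function names are illustrative assumptions.

```python
# Toy sketch: alternate (1) a policy update on environments sampled from the
# current sampling distribution with (2) re-weighting that distribution toward
# low-return (worst-case) environment parameters. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def evaluate(policy_param, env_param):
    """Stand-in for a rollout return: depends on policy and environment parameters."""
    return -(policy_param - env_param) ** 2

def policy_gradient(policy_param, env_param):
    """Analytic gradient of the toy return w.r.t. the policy parameter."""
    return -2.0 * (policy_param - env_param)

# Domain-randomization range for the environment parameter (e.g. a mass or friction value).
env_grid = np.linspace(0.0, 1.0, 11)
sampling_probs = np.ones_like(env_grid) / len(env_grid)  # start uniform

policy_param = 5.0
lr, temperature = 0.05, 5.0

for _ in range(200):
    # 1) Sample a batch of environments from the current sampling distribution.
    idx = rng.choice(len(env_grid), size=8, p=sampling_probs)
    batch = env_grid[idx]

    # 2) Policy step on the sampled environments (stand-in for a PPO/TRPO-style update).
    grad = np.mean([policy_gradient(policy_param, e) for e in batch])
    policy_param += lr * grad

    # 3) Re-weight the sampling distribution toward worst-performing environments,
    #    so the next policy update also lifts the worst-case return.
    returns = np.array([evaluate(policy_param, e) for e in env_grid])
    logits = temperature * (returns.max() - returns)      # low return -> high weight
    weights = np.exp(logits - logits.max())               # numerically stable softmax
    sampling_probs = weights / weights.sum()

avg_return = np.mean([evaluate(policy_param, e) for e in env_grid])
worst_return = np.min([evaluate(policy_param, e) for e in env_grid])
print(f"average return {avg_return:.3f}, worst-case return {worst_return:.3f}")
```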