Robustifying Reinforcement Learning Policies with ℒ_1 Adaptive Control [article]

Yikun Cheng, Pan Zhao, Manan Gandhi, Bo Li, Evangelos Theodorou, Naira Hovakimyan
2022, arXiv preprint
A reinforcement learning (RL) policy trained in a nominal environment can fail in a new or perturbed environment due to dynamic variations. Existing robust methods try to obtain a single fixed policy for all envisioned dynamic-variation scenarios through robust or adversarial training. These methods can yield conservative performance due to their emphasis on the worst case, and often involve tedious modifications to the training environment. We propose an approach to robustifying a trained non-robust RL policy with ℒ_1 adaptive control. Leveraging the capability of an ℒ_1 control law to quickly estimate and actively compensate for dynamic variations, our approach can significantly improve the robustness of an RL policy trained in a standard (i.e., non-robust) way, either in a simulator or in the real world. Numerical experiments validate the efficacy of the proposed approach.
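The paper itself gives no code, but the augmentation idea can be illustrated on a scalar toy system. The sketch below is a minimal, hypothetical instance of an ℒ_1 adaptive loop wrapped around a fixed (stand-in) policy: a state predictor, a piecewise-constant adaptation law that estimates the matched disturbance, and a low-pass-filtered compensation term added to the policy's control. The gains, bandwidth, and the linear "RL policy" are all illustrative assumptions, not values from the paper.

```python
import numpy as np

def simulate(use_l1, sigma_true=2.0, T=5.0, dt=1e-3):
    """Scalar plant x_dot = a_m*x + b*(u + sigma), sigma unknown to the policy."""
    a_m, b = -1.0, 1.0        # nominal dynamics (assumed known to the L1 law)
    omega = 50.0              # bandwidth of the low-pass filter (assumed value)
    x, x_hat, u_ad = 0.0, 0.0, 0.0
    n = int(T / dt)
    xs = np.empty(n)
    for i in range(n):
        u_rl = -2.0 * x       # stand-in for a trained RL policy (hypothetical)
        # Piecewise-constant adaptation: invert the one-step predictor response
        # so the prediction error x_tilde is driven to zero each sampling step.
        x_tilde = x_hat - x
        phi = (np.exp(a_m * dt) - 1.0) / a_m
        sigma_hat = -np.exp(a_m * dt) * x_tilde / (phi * b)
        # Low-pass-filtered compensation: u_ad tracks -sigma_hat
        u_ad += dt * omega * (-sigma_hat - u_ad)
        u = u_rl + (u_ad if use_l1 else 0.0)
        # Predictor uses the estimate; the plant sees the true disturbance.
        x_hat += dt * (a_m * x_hat + b * (u + sigma_hat))
        x += dt * (a_m * x + b * (u + sigma_true))
        xs[i] = x
    return xs
```

With a constant disturbance sigma_true = 2, the bare policy settles at a steady-state offset (about 2/3 for these gains), while the ℒ_1-augmented policy drives the state back toward zero, illustrating the estimate-and-compensate mechanism the abstract describes.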
arXiv:2106.02249v5