Learning deep control policies for autonomous aerial vehicles with MPC-guided policy search

Tianhao Zhang, Gregory Kahn, Sergey Levine, Pieter Abbeel
2016 IEEE International Conference on Robotics and Automation (ICRA)
Model predictive control (MPC) is an effective method for controlling robotic systems, particularly autonomous aerial vehicles such as quadcopters. However, application of MPC can be computationally demanding, and typically requires estimating the state of the system, which can be challenging in complex, unstructured environments. Reinforcement learning can in principle forgo the need for explicit state estimation and acquire a policy that directly maps sensor readings to actions, but is
difficult to apply to unstable systems that are liable to fail catastrophically during training before an effective policy has been found. We propose to combine MPC with reinforcement learning in the framework of guided policy search, where MPC is used to generate data at training time, under full state observations provided by an instrumented training environment. This data is used to train a deep neural network policy, which is allowed to access only the raw observations from the vehicle's onboard sensors. After training, the neural network policy can successfully control the robot without knowledge of the full state, and at a fraction of the computational cost of MPC. We evaluate our method by learning obstacle avoidance policies for a simulated quadrotor, using simulated onboard sensors and no explicit state estimation at test time.
doi:10.1109/icra.2016.7487175 dblp:conf/icra/ZhangKLA16 fatcat:4mryusn4sbfxdpbifmo7gmguau
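
To make the MPC-as-teacher idea concrete, here is a minimal, self-contained Python sketch of the training scheme the abstract describes: a full-state controller generates actions in an instrumented training environment, the resulting (sensor observation, action) pairs supervise a policy that sees only observations, and at test time only that cheap policy runs. Everything in the sketch is an illustrative assumption rather than the authors' implementation: the dynamics are a toy linear system, random shooting stands in for a proper MPC solver, and ridge regression stands in for the deep neural network.

import numpy as np

rng = np.random.default_rng(0)

STATE_DIM, OBS_DIM, ACT_DIM = 4, 6, 2
A = np.eye(STATE_DIM) * 0.95                                  # toy stable dynamics
B = np.vstack([np.eye(ACT_DIM), np.zeros((STATE_DIM - ACT_DIM, ACT_DIM))])
C = rng.standard_normal((OBS_DIM, STATE_DIM))                 # fixed "sensor" map

def observe(x):
    # Simulated onboard sensor reading: a noisy projection of the state.
    return C @ x + 0.01 * rng.standard_normal(OBS_DIM)

def mpc_action(x, horizon=10, n_samples=256):
    # Stand-in for MPC: random shooting over action sequences, scored by
    # a quadratic cost that drives the state toward the origin. It needs
    # the full state x, which is why it is only usable at training time.
    best_cost, best_u0 = np.inf, np.zeros(ACT_DIM)
    for _ in range(n_samples):
        us = rng.uniform(-1.0, 1.0, size=(horizon, ACT_DIM))
        s, cost = x, 0.0
        for u in us:
            s = A @ s + B @ u
            cost += s @ s + 1e-3 * (u @ u)
        if cost < best_cost:
            best_cost, best_u0 = cost, us[0]
    return best_u0

# Collect supervision: MPC controls the system in the instrumented
# training environment while we log (observation, MPC action) pairs.
obs_buf, act_buf = [], []
for episode in range(5):
    x = rng.standard_normal(STATE_DIM)
    for t in range(25):
        u = mpc_action(x)
        obs_buf.append(observe(x))
        act_buf.append(u)
        x = A @ x + B @ u
O, U = np.array(obs_buf), np.array(act_buf)

# Ridge regression stands in for the deep network: a policy mapping raw
# observations directly to actions, with no access to the true state.
W = np.linalg.solve(O.T @ O + 1e-3 * np.eye(OBS_DIM), O.T @ U)

# Test time: only the learned observation-to-action policy runs.
x = rng.standard_normal(STATE_DIM)
for t in range(25):
    x = A @ x + B @ (observe(x) @ W)
print("final state norm under learned policy:", np.linalg.norm(x))

Note that this sketch reduces the method to a single round of behavioral cloning; guided policy search as used in the paper additionally alternates data collection with the partially trained policy and adapts the teacher's objective to keep teacher and policy consistent, which this illustration omits.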