On the Optimization Landscape of Dynamical Output Feedback Linear Quadratic Control [article]

Jingliang Duan, Wenhan Cao, Yang Zheng, Lin Zhao
2022
The optimization landscape of optimal control problems plays an important role in the convergence of many policy gradient methods. Unlike the state-feedback Linear Quadratic Regulator (LQR), for which static policies suffice, static output-feedback policies are typically insufficient to achieve good closed-loop control performance. We investigate the optimization landscape of linear quadratic control with dynamical output-feedback policies, denoted dynamical LQR (dLQR) in this paper. We first show that the dLQR cost varies with similarity transformations. We then derive an explicit form of the optimal similarity transformation for a given observable stabilizing controller. We further characterize the unique observable stationary point of dLQR, which provides an optimality certificate for policy gradient methods under mild assumptions. Finally, we discuss the differences and connections between dLQR and canonical linear quadratic Gaussian (LQG) control. These results shed light on designing policy gradient algorithms for decision-making problems with partially observed information.
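To make the abstract's central object concrete, the following is a minimal sketch of a dynamical output-feedback LQ cost and the effect of a similarity transformation on the controller's internal coordinates. All matrices, gains, and the initial-state covariance `Sigma0` are illustrative assumptions chosen for this sketch, not taken from the paper; the controller is a hand-placed observer-based design so that closed-loop stability holds by the separation principle.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Illustrative plant (matrices are assumptions for this sketch):
# x[t+1] = A x[t] + B u[t],  y[t] = C x[t]
A = np.array([[1.0, 0.2], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Q, R = np.eye(2), np.eye(1)  # quadratic state and input weights

# Observer-based dynamical output-feedback controller:
# xi[t+1] = AK xi[t] + BK y[t],  u[t] = CK xi[t]
K = np.array([[1.25, 1.0]])   # places poles of A - B K at 0.5, 0.5
L = np.array([[1.6], [3.2]])  # places poles of A - L C at 0.2, 0.2
AK, BK, CK = A - B @ K - L @ C, L, -K

def dlqr_cost(AK, BK, CK, Sigma0):
    """LQ cost E[sum_t x'Qx + u'Ru] for the stacked state z = [x; xi]
    with z[0] of covariance Sigma0, via a discrete Lyapunov equation."""
    Acl = np.block([[A, B @ CK], [BK @ C, AK]])   # closed-loop dynamics
    assert max(abs(np.linalg.eigvals(Acl))) < 1.0  # stability required
    Qcl = np.block([[Q, np.zeros((2, 2))],
                    [np.zeros((2, 2)), CK.T @ R @ CK]])  # stacked stage cost
    P = solve_discrete_lyapunov(Acl.T, Qcl)  # solves Acl' P Acl - P + Qcl = 0
    return np.trace(P @ Sigma0)

Sigma0 = np.eye(4)  # assumed initial-state covariance
base = dlqr_cost(AK, BK, CK, Sigma0)

# Similarity transformation of the controller's internal coordinates:
# (AK, BK, CK) -> (T AK T^-1, T BK, CK T^-1). The controller's input-output
# map is unchanged, yet the cost under a fixed Sigma0 can differ.
T = np.diag([2.0, 0.5])
Ti = np.linalg.inv(T)
transformed = dlqr_cost(T @ AK @ Ti, T @ BK, CK @ Ti, Sigma0)
print(base, transformed)
```

Because the transformed controller realizes the same input-output map but a different internal coordinate system, the two printed costs generally differ for a non-orthogonal `T`, which is the sense in which the dLQR cost varies with similarity transformations.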
doi:10.48550/arxiv.2201.09598