Robust Control via Sequential Semidefinite Programming

B. Fares, D. Noll, P. Apkarian
SIAM Journal on Control and Optimization, 2002
This paper discusses nonlinear optimization techniques in robust control synthesis, with special emphasis on design problems which may be cast as minimizing a linear objective function under linear matrix inequality (LMI) constraints in tandem with nonlinear matrix equality constraints. The latter type of constraints renders the design numerically and algorithmically difficult. We solve the optimization problem via sequential semidefinite programming (SSDP), a technique which expands on
sequential quadratic programming (SQP) known in nonlinear optimization. Global and fast local convergence properties of SSDP are similar to those of SQP, and SSDP is conveniently implemented with available semidefinite programming (SDP) solvers. Using two test examples, we compare SSDP to the augmented Lagrangian method, another classical scheme in nonlinear optimization, and to an approach using concave optimization.

Heuristics and ad hoc methods have been developed over recent years to obtain suboptimal solutions to (D). The methods currently employed are usually coordinate descent schemes, which alternately and iteratively fix part of the coordinates of the decision vector, x, while optimizing over the remaining ones. The D-K (scaling-controller) iteration procedure is an example of this type [6, 37], whose popularity may be attributed to the fact that it is conceptually simple and easily implemented as long as the intermediate steps are convex LMI programs. The latter may often be guaranteed through an appropriate choice of the decision variables held fixed at each step. However, a major drawback of coordinate descent schemes is that they almost always fail to converge, even for starting points close to a local solution (see [22]). As a result, controllers obtained via such methods are highly questionable and bear the risk of unnecessary conservatism.

A new optimization approach to robust control design was initiated in [5], where the authors showed that reduced-order H∞ control could be cast as a concave minimization problem. It was observed, however, that in a number of cases local concave minimization, which is known to be numerically difficult, produced unsatisfactory results. This occurs, in particular, when iterations stall, probably due to the lack of second-order information. In [16], we therefore proposed a different approach to (D), again based on nonlinear optimization techniques.
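The block-alternating structure of a coordinate descent scheme can be sketched on a toy problem. This is only an illustration of the general pattern, not the paper's program (D): the objective below is a made-up bilinear-type function chosen so that each one-block subproblem is a convex quadratic with a closed-form minimizer, just as the D-K iteration keeps each intermediate step a convex LMI program.

```python
# Toy coordinate descent sketch (hypothetical problem, not program (D)):
# split the decision vector into two blocks (x, y); each step fixes one
# block and minimizes exactly over the other.
# Problem: minimize f(x, y) = (x*y - 1)^2 + 0.1*(x^2 + y^2).

def coordinate_descent(x=2.0, y=0.1, iters=200):
    for _ in range(iters):
        # minimize over x with y fixed: d/dx = 2*y*(x*y - 1) + 0.2*x = 0
        x = y / (y * y + 0.1)
        # minimize over y with x fixed (symmetric subproblem)
        y = x / (x * x + 0.1)
    return x, y

x, y = coordinate_descent()
print(x, y)  # both tend to sqrt(0.9), a stationary point of f
```

On this benign example the alternation converges, but only at a linear rate; the text's point is that on the nonconvex problems of interest such schemes routinely stall or fail to converge altogether, since no step accounts for the coupling between the blocks.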
The augmented Lagrangian method from nonlinear optimization was successfully extended to program (D). The difficult nonlinear constraints were incorporated into an augmented Lagrangian function, while the LMI constraints, due to their linear structure, were kept explicitly during optimization. A Newton-type method including a line search, or, alternatively, a trust-region strategy, was shown to work if the penalty parameters were appropriately increased at each step and if the so-called first-order update rule for the Lagrange multiplier estimates (cf. [9]) was used. The disadvantage of the augmented Lagrangian method is that its convergence is at best linear if the penalty parameter c is held fixed. Superlinear convergence is guaranteed if c → ∞, but the use of large c is prohibitive in practice due to the inevitable ill-conditioning.

The present investigation therefore aims at adapting methods with better convergence properties, like sequential quadratic programming (SQP), to the case of LMI constrained problems. Minimizing at each step the second-order Taylor expansion of the Lagrangian of (D) about the current iterate defines the tangent subproblem, (T), whose solution will provide the next iterate. Due to the constraints A(x) ≤ 0, (T) is not a quadratic program, as in the case of SQP, but requires minimizing a quadratic objective function under LMI constraints. After convexification of the objective, (T) may be turned into a semidefinite program, conveniently solved with current LMI tools (cf., for instance, [20, 36]). We refer to this approach as sequential semidefinite programming (SSDP). It will be discussed in section 4, and a local convergence analysis will be presented in section 5.
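The first-order multiplier update mentioned above can be sketched on a small example. The problem below is a hypothetical equality-constrained quadratic, not program (D) and without its LMI constraints; it is chosen so the inner minimization of the augmented Lagrangian reduces to a linear solve, which makes the λ ← λ + c·g(x) update and the fixed-c linear convergence easy to see.

```python
import numpy as np

# Hedged sketch of the augmented Lagrangian method with the first-order
# multiplier update lam <- lam + c*g(x). Made-up problem (not program (D)):
#   minimize x1^2 + 2*x2^2   subject to   g(x) = x1 + x2 - 1 = 0.
# For fixed (lam, c), stationarity of
#   L_c(x, lam) = f(x) + lam*g(x) + (c/2)*g(x)^2
# is a 2x2 linear system in x.

def augmented_lagrangian(c=10.0, iters=30):
    lam = 0.0
    for _ in range(iters):
        # grad L_c = 0:
        #   [2 + c,     c] [x1]   [c - lam]
        #   [    c, 4 + c] [x2] = [c - lam]
        A = np.array([[2.0 + c, c], [c, 4.0 + c]])
        b = np.array([c - lam, c - lam])
        x = np.linalg.solve(A, b)
        lam += c * (x[0] + x[1] - 1.0)  # first-order multiplier update
    return x, lam

x, lam = augmented_lagrangian()
print(x)  # analytic solution of the toy problem is (2/3, 1/3)
```

With c held fixed at 10, the multiplier error contracts by a constant factor per iteration, illustrating the at-best-linear rate the text attributes to a fixed penalty parameter; driving c → ∞ would speed this up at the price of an increasingly ill-conditioned inner system.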
Although more complex than most coordinate descent schemes, the advantages of the new approach are at hand:
• The entire vector x of decision variables is updated at each step, so, for instance, we do not have to separate Lyapunov and scaling variables from controller variables.
• Like SQP, SSDP is guaranteed to converge globally, which means from an arbitrary and possibly remote initial guess, if an appropriate line search or trust-region strategy is applied.
• Being of second-order type, the rate of convergence of SSDP is superlinear in
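The convexification of the quadratic objective of (T) can be sketched as follows. The excerpt does not specify which convexification the paper uses, so the eigenvalue-clipping scheme below is an assumption: it is one standard way to replace an indefinite Lagrangian Hessian by a nearby positive definite matrix so that (T) becomes a well-posed semidefinite program.

```python
import numpy as np

# One standard convexification (an assumption here, not necessarily the
# paper's exact scheme): symmetrize the Lagrangian Hessian H and clip its
# eigenvalues from below at eps > 0, yielding a positive definite model.

def convexify(H, eps=1e-6):
    H = 0.5 * (H + H.T)                # enforce symmetry
    w, V = np.linalg.eigh(H)
    w_clipped = np.maximum(w, eps)     # push negative curvature up to eps
    return (V * w_clipped) @ V.T       # reassemble sum_j w_j v_j v_j^T

H = np.array([[2.0, 0.0], [0.0, -1.0]])   # indefinite example Hessian
H_plus = convexify(H)
print(np.linalg.eigvalsh(H_plus))         # all eigenvalues now >= eps
```

Positive curvature directions are left untouched, so second-order information is preserved wherever the model is already convex, which is what the superlinear local convergence of a second-order method relies on.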
doi:10.1137/s0363012900373483