## Interior Methods for Nonlinear Optimization

Anders Forsgren, Philip E. Gill, Margaret H. Wright

*SIAM Review*, 2002

Interior methods are an omnipresent, conspicuous feature of the constrained optimization landscape today, but it was not always so. Primarily in the form of barrier methods, interior-point techniques were popular during the 1960s for solving nonlinearly constrained problems. However, their use for linear programming was not even contemplated because of the total dominance of the simplex method. Vague but continuing anxiety about barrier methods eventually led to their abandonment in favor of newly emerging, apparently more efficient alternatives such as augmented Lagrangian and sequential quadratic programming methods. By the early 1980s, barrier methods were almost without exception regarded as a closed chapter in the history of optimization. This picture changed dramatically with Karmarkar's widely publicized announcement in 1984 of a fast polynomial-time interior method for linear programming; in 1985, a formal connection was established between his method and classical barrier methods. Since then, interior methods have advanced so far, so fast, that their influence has transformed both the theory and practice of constrained optimization. This article provides a condensed, selective look at classical material and recent research about interior methods for nonlinearly constrained optimization.

### 1.1. Roots in Linear and Nonlinear Programming

Although the focus of this article is on nonlinearly constrained problems, understanding the context of the interior-point revolution requires a short digression on linear programming (minimization of a linear function subject to linear constraints). A fundamental property of well-behaved n-variable linear programs with m inequality constraints is that a vertex minimizer must exist, i.e., a point where n constraints with linearly independent gradients hold with equality. (See, e.g., [20, 92] for details about linear programming.) The simplex method, invented by Dantzig in 1947, is an iterative procedure that solves linear programs by exploiting this property. A simplex iteration moves from vertex to vertex, changing (one at a time) the set of constraints that hold exactly, decreasing the objective as it goes, until an optimal vertex is found. From the very start, the simplex method dominated the field of linear programming.
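The vertex property just described can be demonstrated by brute force. The sketch below is not from the article; the problem data are invented for illustration. It enumerates every point where n = 2 constraint boundaries of a tiny linear program intersect, keeps the feasible ones (these are the vertices), and picks the best, which is exactly the kind of point the simplex method seeks.

```python
# Brute-force vertex enumeration for a tiny 2-variable linear program
# (illustrative only; the simplex method visits vertices far more cleverly).
from itertools import combinations

# Constraints a . x <= b:  x + y <= 4,  x <= 3,  -x <= 0,  -y <= 0
A = [(1.0, 1.0), (1.0, 0.0), (-1.0, 0.0), (0.0, -1.0)]
b = [4.0, 3.0, 0.0, 0.0]
c = (-1.0, -2.0)  # objective: minimize -x - 2y

def solve2x2(i, j):
    """Intersection of the boundaries of constraints i and j, if unique."""
    (a1, a2), (a3, a4) = A[i], A[j]
    det = a1 * a4 - a2 * a3
    if abs(det) < 1e-12:
        return None  # parallel boundaries: no unique intersection
    x = (b[i] * a4 - a2 * b[j]) / det
    y = (a1 * b[j] - b[i] * a3) / det
    return (x, y)

def feasible(p):
    return all(a[0] * p[0] + a[1] * p[1] <= bi + 1e-9 for a, bi in zip(A, b))

# Every feasible intersection of two constraint boundaries is a vertex.
vertices = [p for i, j in combinations(range(len(A)), 2)
            if (p := solve2x2(i, j)) is not None and feasible(p)]
best = min(vertices, key=lambda p: c[0] * p[0] + c[1] * p[1])
print(best)  # -> (0.0, 4.0), objective value -8
```

The exhaustive search over all pairs already hints at the combinatorial worst case discussed next: the number of candidate vertices grows like the number of ways to choose n of m constraints.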
Although "nonsimplex" strategies for linear programming were suggested and tried from time to time, they could not consistently match the simplex method in overall speed and reliability. Furthermore, a simplex-centric world view had the effect that even "new" techniques mimicked the motivation of the simplex method by always staying on a subset of exactly satisfied constraints. The preeminence of the simplex method was challenged not because of failures in practice (the simplex method was, and is, used routinely to solve enormous linear programs) but by worries about its computational complexity. One can argue that the simplex method and its progeny are inherently combinatorial, in that their performance seems to be bound in the worst case to the maximum number of ways in which n out of m constraints can hold with equality. In fact, with standard pivoting rules specifying the constraint to be dropped and added at each iteration, the simplex method can visit every vertex of the feasible region [64]; thus its worst-case complexity is exponential in the problem dimension. As a result, there was great interest in finding a polynomial-time linear programming algorithm.

The first success in this direction was achieved in 1979 by Khachian, whose ellipsoid method was derived from approaches proposed originally for nonlinear optimization. (See [92] for details about Khachian's method.) Despite its polynomial complexity bound, however, the ellipsoid method performed poorly in practice compared to the simplex method, and the search continued for a polynomial-time linear programming method that was genuinely fast in running time. The start of the interior-point revolution was Karmarkar's announcement [63] in 1984 of a polynomial-time linear programming method that was 50 times faster than the simplex method.
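The combinatorial bound mentioned above is easy to make concrete numerically; the snippet below (not from the article) tabulates the number of ways n of m constraints can hold with equality, taking m = 2n as a representative scaling, and shows how quickly it explodes.

```python
# Count the candidate "active sets": C(m, n) ways to pick n of m
# constraints to hold with equality. This is the worst-case combinatorial
# bound on vertex-following methods discussed in the text.
import math

for n in (5, 10, 20, 40):
    m = 2 * n  # a representative ratio of constraints to variables
    print(f"n = {n:2d}, m = {m:2d}: C(m, n) = {math.comb(m, n)}")
```

Already at n = 40 the count exceeds 10^22, which is why polynomial-time guarantees became such a pressing question.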
Amid the frenzy of interest in Karmarkar's method, it was shown in 1985 [51] that there was a formal equivalence between Karmarkar's method and the classical logarithmic barrier method (see sections 1.2 and 3) applied to linear programming, and long-discarded barrier methods were soon rejuvenated as polynomial-time algorithms for linear programming. Furthermore, barrier methods (unlike the simplex method) could be applied not only to linear programming but to nonlinearly constrained problems as well.

### 2. Inequality-Constrained Optimization

We begin with problems containing only inequality constraints:
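The displayed formulation that followed the colon is not included in this excerpt. The standard statement used in the interior-methods literature, with f the objective and c the vector of m constraint functions (notation assumed here), reads:

```latex
\min_{x \in \mathbb{R}^n} \; f(x) \quad \text{subject to} \quad c(x) \ge 0 .
```

For such a problem, the classical logarithmic barrier method mentioned above replaces the constraints by a penalty that blows up as the boundary of the feasible region is approached, solving a sequence of unconstrained subproblems

```latex
\min_{x} \; B(x, \mu) = f(x) - \mu \sum_{i=1}^{m} \ln c_i(x), \qquad \mu > 0 ,
```

with the barrier parameter \(\mu\) driven toward zero.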

doi:10.1137/s0036144502414942