MODIFIED LIMITED MEMORY BFGS METHOD WITH NONMONOTONE LINE SEARCH FOR UNCONSTRAINED OPTIMIZATION

Gonglin Yuan, Zengxin Wei, Yanlin Wu
2010 Journal of the Korean Mathematical Society  
In this paper, we propose two limited memory BFGS algorithms with a nonmonotone line search technique for unconstrained optimization problems. The global convergence of the given methods is established under suitable conditions. Numerical results show that the presented algorithms are more competitive than the normal BFGS method.

[...] the Hessian matrix of f(x) at the current iteration. Moreover, the exact solution of the system (1.3) can be too burdensome to compute, or is unnecessary when x_k is far from a solution [31]. Inexact Newton methods [8, 31] represent the basic approach underlying most Newton-type large-scale algorithms. At each iteration, the current estimate of the solution is updated by approximately solving the linear system (1.3) with an iterative algorithm; the inner iteration is typically "truncated" before an exact solution of the linear system is reached.
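To make the truncation idea concrete, the following is a minimal sketch (ours, not taken from the paper) of one inexact Newton step computed with the linear conjugate gradient method, assuming (1.3) is the usual Newton system H_k d = -g_k; the function name truncated_newton_step, the hess_vec callback, and the forcing term eta are all our own illustrative choices.

```python
import numpy as np

def truncated_newton_step(g, hess_vec, eta=0.5, max_inner=50):
    """Approximately solve the Newton system H d = -g with linear CG.

    g        : gradient at the current iterate (1-D numpy array)
    hess_vec : callable v -> H v (Hessian-vector product; hypothetical name)
    eta      : forcing term; the inner CG loop is "truncated" as soon as
               ||H d + g|| <= eta * ||g||
    """
    d = np.zeros_like(g)
    r = -g.copy()                      # residual b - H d with b = -g, d = 0
    p = r.copy()
    rs = r @ r
    g_norm = np.linalg.norm(g)
    for _ in range(max_inner):
        if np.sqrt(rs) <= eta * g_norm:    # inexact Newton stopping test
            break
        Hp = hess_vec(p)
        pHp = p @ Hp
        if pHp <= 0.0:                 # negative curvature: stop with the
            break                      # current (possibly zero) direction
        alpha = rs / pHp
        d += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return d if d.any() else -g        # steepest descent as a safeguard
```

For an explicit Hessian H one would pass hess_vec = lambda v: H @ v; in truncated Newton codes the product is instead obtained by finite differences of gradients or automatic differentiation, so the Hessian is never formed.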
The limited memory BFGS (L-BFGS) method (see [3]) is an adaptation of the BFGS method to large-scale problems. Its implementation is almost identical to that of the standard BFGS method; the only difference is that the inverse Hessian approximation is not formed explicitly, but is defined by a small number of BFGS updates. The method often attains a fast rate of linear convergence and requires minimal storage. Since the standard BFGS method is widely used to solve general minimization problems, most studies of limited memory methods concentrate on the L-BFGS method.

The BFGS update exploits only gradient information, while the available function values are neglected. Therefore, many attempts have been made to modify quasi-Newton methods so that they use both gradient and function value information (e.g. [41, 51]). To achieve higher-order accuracy in approximating the curvature of the objective function, Wei, Li, and Qi [41] and Zhang, Deng, and Chen [51] proposed modified BFGS-type methods for (1.1), and the reported numerical results show that their average performance is better than that of the standard BFGS method.

A monotone line search technique is often used to determine the stepsize α_k; however, enforcing monotonicity may produce a series of very small steps when the contours of the objective function form a family of curves with large curvature [18]. A nonmonotone line search for unconstrained optimization was proposed by Grippo et al. [18]. Han and Liu [21] presented a nonmonotone BFGS method for (1.1) and established its global convergence for convex objective functions; their numerical results show that the method is more competitive than the normal BFGS method with a monotone line search. We [49] proved its superlinear convergence.

Motivated by these observations, we propose two limited memory BFGS-type methods, based on Wei et al. [41], Zhang et al. [51], and Han and Liu [21], respectively, which are suitable for solving large-scale unconstrained optimization problems. The major contribution of this paper is the extension of the BFGS-type methods of [41] and [51], combined with the nonmonotone line search technique, to the limited memory scheme. As in the standard L-BFGS method, the pairs {s_i, y_i}, i = k - m + 1, ..., k, are stored, where s_i = x_{i+1} - x_i, y_i = g_{i+1} - g_i, g_i = g(x_i) and g_{i+1} = g(x_{i+1}) are the gradients of f(x) at x_i and x_{i+1}, respectively, and m > 0 is a constant. A distinguishing feature of our proposed L-BFGS method is that [...]
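Although the record is truncated at this point, the two ingredients discussed above admit a compact sketch. The code below is a generic, textbook-style illustration, not the authors' modified algorithms: lbfgs_direction is the standard two-loop recursion that applies the inverse Hessian approximation implicitly defined by the stored pairs {s_i, y_i}, and nonmonotone_armijo is a Grippo-type rule in the spirit of [18]; all names and default parameters (delta, rho, the memory sizes) are our own assumptions.

```python
import numpy as np

def lbfgs_direction(g, pairs):
    """Two-loop recursion: return d = -H_k g, where H_k is the inverse
    Hessian approximation implicitly defined by the stored (s_i, y_i)
    pairs (oldest first); H_k itself is never formed."""
    q = g.copy()
    alphas = []
    for s, y in reversed(pairs):               # newest pair first
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        alphas.append((rho, a))
        q -= a * y
    if pairs:                                  # initial scaling H_k^0 = gamma_k I
        s, y = pairs[-1]
        q *= (s @ y) / (y @ y)
    for (rho, a), (s, y) in zip(reversed(alphas), pairs):  # oldest first
        b = rho * (y @ q)
        q += (a - b) * s
    return -q                                  # equals -g on the first iteration

def nonmonotone_armijo(f, x, d, g, recent_f, delta=1e-4, rho=0.5, max_back=50):
    """Grippo-type rule: accept the first alpha with
    f(x + alpha d) <= max(recent_f) + delta * alpha * g^T d,
    where recent_f holds the last M objective values."""
    f_ref = max(recent_f)
    gtd = g @ d                                # negative for a descent direction
    alpha = 1.0
    for _ in range(max_back):
        if f(x + alpha * d) <= f_ref + delta * alpha * gtd:
            break
        alpha *= rho
    return alpha
```

In a full method one would, after each accepted step, append (s_k, y_k) to a deque(maxlen=m) of pairs (skipping pairs with y_k^T s_k <= 0) and f(x_{k+1}) to a deque(maxlen=M) of recent objective values; the modified methods of [41] and [51] would further replace y_k by a corrected vector that also uses function values, which this sketch deliberately omits.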
doi:10.4134/jkms.2010.47.4.767