Newton-conjugate-gradient methods for solitary wave computations

Jianke Yang
2009 Journal of Computational Physics  
In this paper, Newton-conjugate-gradient methods are developed for solitary wave computations. These methods are based on Newton iterations, coupled with conjugate-gradient iterations to solve the resulting linear Newton-correction equation. When the linearization operator is self-adjoint, the preconditioned conjugate-gradient method is proposed to solve this linear equation. If the linearization operator is non-self-adjoint, the preconditioned biconjugate-gradient method is proposed to
solve the linear equation. The resulting methods are applied to compute both the ground states and excited states in a large number of physical systems such as the two-dimensional NLS equations with and without periodic potentials, the fifth-order KdV equation, and the fifth-order KP equation. Numerical results show that these proposed methods are faster than the other leading numerical methods, often by orders of magnitude. In addition, these methods are very robust and converge in all the examples tested. Furthermore, they are very easy to implement. It is also shown that the nonlinear conjugate-gradient methods are not robust and are inferior to the proposed methods.

The Petviashvili method was first proposed in the 1970s [4] and later generalized in [5,6]. It is based on the fixed-point iteration idea, but with a key improvement: the introduction of a stabilizing factor. This method became popular in recent years due to its easy implementation in arbitrary spatial dimensions as well as its fast convergence in many situations. However, it only converges to the ground states of nonlinear wave equations, and diverges for excited states [6,12].

The imaginary-time evolution method is also a familiar method, especially in the physics community (see [13] for instance). But its original version is very slow, and accelerated versions were developed only recently [7,8]. These methods are based on the idea of turning the stationary solitary wave computation into a time evolution problem of diffusion type, normalizing the solution by its power or amplitude at each evolution step. One important component of these methods is the introduction of an acceleration operator into the time evolution equation, which improves the convergence speed dramatically. These methods are also very easy to implement in arbitrary spatial dimensions, and their convergence is either faster than or competitive with the Petviashvili-type methods [8].
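To make the Petviashvili idea concrete, the following is a minimal Python sketch of the iteration for the 1D NLS profile equation u'' - mu*u + u^3 = 0, whose exact ground state is sqrt(2*mu)*sech(sqrt(mu)*x). The grid, initial guess, and stopping rule here are illustrative assumptions, not details taken from this paper.

```python
import numpy as np

def petviashvili_nls1d(mu=1.0, L=40.0, n=256, tol=1e-10, max_iter=500):
    """Petviashvili iteration for u'' - mu*u + u^3 = 0 (1D NLS profile).

    The linear part M = mu - d^2/dx^2 is inverted in Fourier space; the
    stabilizing factor S keeps the fixed-point iteration from diverging.
    """
    x = np.linspace(-L/2, L/2, n, endpoint=False)
    k = 2*np.pi*np.fft.fftfreq(n, d=L/n)
    M = mu + k**2                        # Fourier symbol of (mu - d^2/dx^2)
    p = 3                                # degree of the power nonlinearity
    alpha = p/(p - 1)                    # standard exponent for the factor S
    u = np.exp(-x**2)                    # rough single-hump initial guess
    for _ in range(max_iter):
        uh = np.fft.fft(u)
        nh = np.fft.fft(u**3)
        # S = <M u, u> / <u^3, u>, computed via Parseval in Fourier space
        S = np.sum(M*np.abs(uh)**2) / np.real(np.sum(nh*np.conj(uh)))
        u_new = np.real(np.fft.ifft(S**alpha * nh / M))
        if np.max(np.abs(u_new - u)) < tol:
            return u_new, x
        u = u_new
    return u, x
```

For mu = 1 the exact profile is sqrt(2)*sech(x), and from the Gaussian guess the iteration converges to it geometrically, with the stabilizing factor S tending to 1.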
However, these methods generally can only converge to the ground states, just like the Petviashvili-type methods [8]. In order to compute excited states, the squared-operator iteration methods were developed in [9]. These methods are based on the idea of time-evolving a "squared" operator equation (the power or amplitude normalization is optional). Evolution of this squared equation guarantees that these methods can converge to any solitary wave, including excited states. An acceleration operator is built into the squared equation to improve convergence speed. When a mode-elimination technique is incorporated into these methods [14], the resulting method (called the modified squared-operator method in [9]) converges even faster. These squared-operator-type methods are also easy to implement for general nonlinear wave equations in arbitrary spatial dimensions, and they deliver satisfactory performance in many situations [9]. But there are situations where all the above methods can be quite slow, especially when the wave's propagation constant approaches the edge of the continuous spectrum, so that the wave becomes less localized (see Examples 3.4 and 3.5 later in the text). Thus even faster numerical methods are still called for.

In a separate development, the conjugate-gradient method was developed in the early 1950s and has become the most prominent iterative method for solving large systems of linear equations [15-18]. Viewing the linear equation as the minimization of a quadratic form, this method descends along conjugate directions instead of the local gradient. The conjugate-gradient method has a number of important properties. One property is that for symmetric positive-definite matrices, this method gives the exact solution within n steps, where n is the size of the matrix [16-18].
Another property is that for symmetric positive-definite matrices, the matrix-weighted error decreases monotonically with each iteration [16,17]. The third property is that when the matrix size is large, this method often gives the solution to the required accuracy in far fewer than n steps, especially when a suitable preconditioning matrix is introduced [16,17]. The conjugate-gradient method was originally developed for linear equations with symmetric positive-definite matrices, but practical applications show that it can also solve linear equations with symmetric indefinite matrices (unless there is a breakdown due to division by zero during the iterations, which rarely occurs) [19]. Generalizations of the conjugate-gradient method to symmetric indefinite matrices, non-symmetric matrices and nonlinear systems have also been developed [20-25]. To date, no work has appeared in the literature applying the conjugate-gradient method to solitary wave computations. It is not yet clear whether this method can be applied to solitary waves. If so, what scheme is the most efficient for this application? In addition, would this method perform better than the other leading numerical methods for solitary waves described above?

In this paper, we apply conjugate-gradient methods to the computation of general solitary waves (both the ground state and excited states) in nonlinear wave equations. The guiding principles in our algorithm design are fast convergence and easy implementation, which are equally weighted. We first linearize the solitary wave equation around an iterated solution and update the solution by solving a linear inhomogeneous operator equation, which resembles the idea of Newton's method. Then, instead of solving this linear equation by direct methods as in the traditional Newton's method, we use conjugate-gradient-type methods to solve it.
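The classic conjugate-gradient recursion discussed above fits in a few lines. This is a textbook sketch in Python (not the paper's MATLAB code), assuming a dense symmetric positive-definite matrix:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Textbook CG for A x = b with A symmetric positive-definite.

    Each step does an exact line search along a direction that is
    A-conjugate to all previous ones, so in exact arithmetic the method
    terminates in at most n steps (n = size of A).
    """
    n = len(b)
    x = np.zeros(n)
    r = b.copy()                    # residual b - A x for x = 0
    p = r.copy()                    # first direction = steepest descent
    rs = r @ r
    for _ in range(max_iter if max_iter is not None else n):
        Ap = A @ p
        a = rs / (p @ Ap)           # exact line search along p
        x += a * p
        r -= a * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # next A-conjugate search direction
        rs = rs_new
    return x
```

For well-conditioned or preconditioned matrices the residual typically drops below tolerance in far fewer than n iterations, which is the property the text emphasizes.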
If the linearization operator is self-adjoint, we use the preconditioned conjugate-gradient method to solve this linear equation; this method will be called the Newton-CG method in this paper. If the linearization operator is non-self-adjoint, we use the preconditioned biconjugate-gradient method; this method will be called the Newton-BCG method. We show that both methods converge for the ground state as well as the excited states of a wave system. In addition, they are very robust and converge in all our numerical tests, over various physical wave systems and wide ranges of initial conditions (as long as the initial condition is reasonably close to the exact solution). No breakdown of these methods was ever observed (even though it is theoretically possible).

The performance of these methods is demonstrated on a number of physical models such as the two-dimensional nonlinear Schrödinger (NLS) equations with and without periodic potentials, the fifth-order Korteweg-de Vries (KdV) equation, and the fifth-order Kadomtsev-Petviashvili (KP) equation. They are found to converge much faster than the other leading numerical methods, often by orders of magnitude. In addition, these methods are very easy to implement regardless of the number of dimensions (a sample MATLAB code of the Newton-CG method is displayed in Appendix A). Furthermore, we show that these Newton-CG/BCG methods, which are based on conjugate-gradient iterations on a linear equation, are much better than the nonlinear conjugate-gradient methods, which are sensitive to initial conditions. We expect that these proposed Newton-CG/BCG methods will replace the existing numerical methods and become the premier methods for computing both ground-state and excited-state solitary waves in the days to come.

We would like to make a few remarks to put our results in a broader context.
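To make the Newton-CG structure concrete, here is a hedged toy sketch: an outer Newton loop whose correction equations are solved by inner CG iterations, applied to the simple two-point boundary-value problem -u'' + u + u^3 = f. This is not one of the paper's solitary-wave examples; the equation, grid, and tolerances are illustrative assumptions chosen so that the Jacobian is symmetric positive-definite and plain CG applies.

```python
import numpy as np

def _cg(A, b, tol=1e-12):
    """Minimal inner CG solver (A symmetric positive-definite)."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(len(b)):
        Ap = A @ p
        a = rs / (p @ Ap)
        x += a * p
        r -= a * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def newton_cg_bvp(n=100, newton_tol=1e-10):
    """Newton-CG for -u'' + u + u^3 = f on (0,1), u(0) = u(1) = 0.

    f is manufactured from u_exact = sin(pi x), so the computed solution
    should match u_exact up to the O(h^2) discretization error.  The
    Jacobian -D2 + I + 3 diag(u^2) is symmetric positive-definite here.
    """
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)
    D2 = (np.diag(-2.0*np.ones(n)) + np.diag(np.ones(n - 1), 1)
          + np.diag(np.ones(n - 1), -1)) / h**2
    u_exact = np.sin(np.pi * x)
    f = (np.pi**2 + 1)*u_exact + u_exact**3      # manufactured right-hand side
    F = lambda u: -D2 @ u + u + u**3 - f         # nonlinear residual
    u = np.zeros(n)
    for _ in range(20):                          # outer Newton loop
        J = -D2 + np.eye(n) + 3*np.diag(u**2)    # linearization operator
        u += _cg(J, -F(u))                       # inner CG: J du = -F(u)
        if np.linalg.norm(F(u)) < newton_tol:
            break
    return u, u_exact, x
```

In the solitary-wave setting of the paper the linearization operator is applied spectrally rather than stored as a dense matrix, and a preconditioner is essential; this dense toy only shows the outer/inner loop structure.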
The combination of Newton-type methods (for solving nonlinear equations) and Krylov-subspace methods (for solving the resulting linear Newton-correction equations) is a well-known technique. In the literature, these methods are often referred to as Newton-Krylov methods.