Backward Error of Polynomial Eigenvalue Problems Solved by Linearization of Lagrange Interpolants

Piers W. Lawrence, Robert M. Corless
2015 SIAM Journal on Matrix Analysis and Applications  
This article considers the backward error of the solution of polynomial eigenvalue problems expressed as Lagrange interpolants. One of the most common strategies for solving polynomial eigenvalue problems is linearization, in which the polynomial eigenvalue problem is transformed into an equivalent larger linear eigenvalue problem and solved using any appropriate eigensolver. Much of the existing literature on the backward error of polynomial eigenvalue problems focuses on polynomials expressed in the classical monomial basis. Hence, the objective of this article is to carry out the necessary extensions for polynomials expressed in the Lagrange basis. We construct one-sided factorizations that give simple expressions relating the eigenvectors of the linearization to the eigenvectors of the polynomial eigenvalue problem. Using these relations, we are able to bound the backward error of an approximate eigenpair of the polynomial eigenvalue problem relative to the backward error of an approximate eigenpair of the linearization. We develop bounds for the backward error involving both the norms of the polynomial coefficients and the properties of the Lagrange basis generated by the interpolation nodes. We also present several numerical examples to illustrate the numerical properties of the linearization and develop a balancing strategy to improve the accuracy of the computed solutions.

Linearization. One of the most widespread methods for solving PEPs is linearization [22], which is to say that the problem is transformed into a larger generalized eigenvalue problem having the same eigenstructure. That is, the linearization has the same eigenvalues as the polynomial, and the eigenvectors of the polynomial are easily recovered from those of the linearization. Since the problem is now a linear generalized eigenvalue problem, one may use any of the well-established algorithms for computing its eigenvalues and eigenvectors, for example, the QZ algorithm [36]. Certainly, linearization has proven to be an extremely convenient method for computing all of the roots of scalar polynomials, and many different linearizations have been proposed. Almost all of the linearizations proposed in the literature to date are constructed from the monomial-basis coefficients of the polynomial [4, 13, 19, 34], although there have been some notable exceptions for polynomials satisfying three-term recurrence relations [5, 24].
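For a scalar polynomial given by its monomial coefficients, linearization reduces root-finding to a standard eigenvalue problem on the companion matrix. A minimal sketch in NumPy (the helper `companion_roots` is hypothetical, for illustration only):

```python
import numpy as np

def companion_roots(coeffs):
    """Roots of a monic scalar polynomial via the eigenvalues of its
    companion matrix (a linearization built from the monomial-basis
    coefficients).  `coeffs` = [c0, c1, ..., c_{n-1}] defines
    p(x) = x^n + c_{n-1} x^{n-1} + ... + c1 x + c0.
    """
    n = len(coeffs)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)       # subdiagonal of ones
    C[:, -1] = -np.asarray(coeffs)   # last column carries -c_j
    return np.linalg.eigvals(C)      # eigenvalues of C are the roots of p

# p(x) = x^2 - 3x + 2 = (x - 1)(x - 2), so the roots are 1 and 2
print(np.sort(companion_roots([2.0, -3.0]).real))
```

This is the same mechanism `numpy.roots` uses internally; for matrix polynomials the scalar coefficients are replaced by the matrix coefficients in block form.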
Most of the aforementioned linearizations were developed for computing the roots of scalar polynomials only. However, almost all can be extended to matrix polynomials in a very simple way, for example, the extensions proposed by Amiraslani, Corless, and Lancaster [2], as well as the generalization of the Fiedler companion forms proposed by Antoniou and Vologiannidis [4]. Before introducing the particular linearizations for PEPs expressed in barycentric Lagrange form, we first recall the basic definitions of linearization and strong linearization relevant to the discussion. The following is a restatement of the definition of linearization introduced by Gohberg, Kaashoek, and Lancaster [22], later named strong linearization by Lancaster and Psarrakos [29].

Definition 2.1 (linearization of order mn [22]). A linear matrix pencil L(λ) = λB − A is said to be a linearization of P(λ) of order mn if L(λ) is of size mn × mn and the polynomials P(λ) and L(λ) are related in the following way:
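As a concrete scalar sketch of a Lagrange-basis linearization, the following builds an arrowhead companion pencil λB − A directly from interpolation nodes, values, and barycentric weights, and solves it with QZ via SciPy. The sign and ordering conventions below are one common choice, assumed here for illustration rather than taken from the paper; by the arrowhead determinant formula and the first barycentric form, det(λB − A) = −p(λ), so the finite generalized eigenvalues are the roots of the interpolant, alongside two spurious infinite eigenvalues introduced by the singular B.

```python
import numpy as np
from scipy.linalg import eig  # assumes SciPy is available for the QZ-based solver

def lagrange_pencil_roots(nodes, values):
    """Roots of the degree-<=n polynomial interpolating `values` at the
    n+1 distinct `nodes`, via an (n+2)x(n+2) arrowhead pencil lambda*B - A.
    Two spurious infinite (or numerically huge) eigenvalues are filtered out.
    """
    x = np.asarray(nodes, dtype=float)
    p = np.asarray(values, dtype=float)
    n1 = len(x)  # n+1 interpolation nodes
    # barycentric weights w_j = 1 / prod_{k != j} (x_j - x_k)
    w = np.array([1.0 / np.prod(x[j] - np.delete(x, j)) for j in range(n1)])
    A = np.zeros((n1 + 1, n1 + 1))
    A[0, 1:] = -p            # first row of lambda*B - A holds the values
    A[1:, 0] = -w            # first column holds the barycentric weights
    A[1:, 1:] = np.diag(x)   # diagonal holds the nodes
    B = np.eye(n1 + 1)
    B[0, 0] = 0.0            # singular B => two infinite eigenvalues
    lam = eig(A, B, right=False)
    lam = lam[np.isfinite(lam) & (np.abs(lam) < 1e8)]
    return np.sort(lam.real)

# p(x) = x^2 - 1 sampled at nodes 0, 1, 2 gives values -1, 0, 3;
# the two finite eigenvalues of the pencil are the roots -1 and 1
print(lagrange_pencil_roots([0.0, 1.0, 2.0], [-1.0, 0.0, 3.0]))
```

For a matrix polynomial P(λ) the scalar entries p_j become the matrix values P(x_j) and the weights multiply identity blocks; the eigenvector relations and backward error bounds developed in the article connect the eigenpairs of this larger pencil back to those of P.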
doi:10.1137/140979034