Analysis of a Classical Matrix Preconditioning Algorithm [article]

Leonard J. Schulman, Alistair Sinclair
2015, arXiv pre-print
We study a classical iterative algorithm for balancing matrices in the L_∞ norm via a scaling transformation. This algorithm, which goes back to Osborne and Parlett & Reinsch in the 1960s, is implemented as a standard preconditioner in many numerical linear algebra packages. Surprisingly, despite its widespread use over several decades, no bounds were known on its rate of convergence. In this paper we prove that, for any irreducible n×n (real or complex) input matrix A, a natural variant of the algorithm converges in O(n^3 log(nρ/ε)) elementary balancing operations, where ρ measures the initial imbalance of A and ε is the target imbalance of the output matrix. (The imbalance of A is max_i |log(a_i^out/a_i^in)|, where a_i^out, a_i^in are the maximum entries in magnitude in the ith row and column respectively.) This bound is tight up to the log n factor. A balancing operation scales the ith row and column so that their maximum entries are equal, and requires O(m/n) arithmetic operations on average, where m is the number of non-zero elements in A. Thus the running time of the iterative algorithm is Õ(n^2 m). This is the first time bound of any kind on any variant of the Osborne-Parlett-Reinsch algorithm. We also prove a conjecture of Chen that characterizes those matrices for which the limit of the balancing process is independent of the order in which balancing operations are performed.
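The elementary balancing operation and the imbalance measure from the abstract are easy to sketch in code. The following is a minimal Python illustration, not the paper's analyzed variant: it applies balancing operations in simple round-robin order (the classical Parlett-Reinsch style), whereas the paper analyzes a particular variant of the algorithm. All function names are my own, and the sketch assumes an irreducible input matrix, so every off-diagonal row and column maximum is nonzero.

```python
import numpy as np

def imbalance(A):
    """Imbalance max_i |log(a_i^out / a_i^in)|, where a_i^out and a_i^in are
    the largest magnitudes in row i and column i (diagonal excluded, since
    a similarity scaling never changes diagonal entries)."""
    B = np.abs(A).astype(float)
    np.fill_diagonal(B, 0.0)
    row_max = B.max(axis=1)   # a_i^out for each i
    col_max = B.max(axis=0)   # a_i^in for each i
    return float(np.max(np.abs(np.log(row_max / col_max))))

def balance(A, eps=1e-6, max_ops=10**5):
    """Apply elementary balancing operations round-robin over the indices
    until the imbalance drops below eps. Each operation replaces A by
    D A D^{-1} with D = diag(1, ..., alpha, ..., 1), chosen so that the
    i-th row and column maxima become equal (their geometric mean)."""
    A = A.astype(float).copy()
    n = A.shape[0]
    for t in range(max_ops):
        if imbalance(A) < eps:
            break
        i = t % n
        B = np.abs(A)
        B[i, i] = 0.0
        r, c = B[i, :].max(), B[:, i].max()
        alpha = np.sqrt(c / r)    # alpha*r == c/alpha == sqrt(r*c)
        A[i, :] *= alpha          # scale row i ...
        A[:, i] /= alpha          # ... and column i; A[i, i] is unchanged
    return A
```

Because each step is a diagonal similarity transformation, the eigenvalues of the balanced matrix are those of the input; only the conditioning of the eigenvalue problem improves.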
arXiv:1504.03026v2