### Regularization with randomized SVD for large-scale discrete inverse problems

Hua Xiang and Jun Zou, *Inverse Problems*, 2013
In this paper we propose an algorithm for solving the large-scale discrete ill-conditioned linear problems arising from the discretization of linear or nonlinear inverse problems. The algorithm combines some existing regularization techniques and regularization parameter choice rules with a randomized singular value decomposition (SVD), so that only much smaller systems need to be solved instead of the original large-scale regularized system. The algorithm applies directly to some
existing regularization methods, such as the Tikhonov and truncated SVD methods, together with popular regularization parameter choice rules such as the L-curve, the GCV function, the quasi-optimality criterion and the discrepancy principle. The error of the approximate regularized solution is analyzed, and the efficiency of the method is demonstrated by numerical examples.

The Tikhonov regularized solution can be expanded in the SVD basis as

$$x_\mu = \sum_{i=1}^{n} f_i \, \frac{u_i^T b}{\sigma_i} \, v_i, \qquad (3)$$

where $f_i = \sigma_i^2/(\sigma_i^2 + \mu^2)$ is the Tikhonov filter factor [29]. Replacing the filter factors $f_i$ appropriately by 0's and 1's gives the truncated SVD method (TSVD) [25], another popular regularization method, which uses the best low-rank approximation of $A$. The TSVD regularized solution $x_k$ is given by

$$x_k = \sum_{i=1}^{k} \frac{u_i^T b}{\sigma_i} \, v_i,$$

where the positive integer $k$ is the truncation parameter, chosen so that the noise-dominated small singular values are discarded. Since $A$ mostly arises from the discretization of some compact operator, it has singular values of very small magnitude. One can easily see from (3) that, without regularization (i.e., $\mu = 0$), the solution is easily contaminated by perturbations in the measurement data $b$. By introducing regularization ($\mu \neq 0$), we make a compromise between the sensitivity of the problem and the perturbation of the measured data, and greatly reduce the effect of the noise in the data.

A key issue for the success of Tikhonov regularization is how to determine a reasonable regularization parameter $\mu$. Several popular techniques exist in the literature for selecting effective regularization parameters. When the noise level is unknown, we may use heuristic methods, for example the so-called L-curve method [26, 31], which plots $(\log \|Ax_\mu - b\|, \log \|x_\mu\|)$ over a range of $\mu$, i.e., the norm of the regularized solution versus the corresponding residual norm. If there is a corner on the L-curve, the corresponding parameter $\mu$ can be taken as the desired regularization parameter.
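The filter-factor view above admits a compact NumPy sketch covering both Tikhonov and TSVD; the function name `svd_regularize` and its interface are illustrative, not from the paper:

```python
import numpy as np

def svd_regularize(A, b, mu=None, k=None):
    """Regularized solution via the SVD of A.

    If mu is given, Tikhonov filter factors f_i = sigma_i^2 / (sigma_i^2 + mu^2)
    are used; if k is given, TSVD keeps only the k largest singular values.
    (Illustrative sketch, not the paper's implementation.)
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeffs = U.T @ b / s                 # the terms u_i^T b / sigma_i
    if mu is not None:                   # Tikhonov filtering
        f = s**2 / (s**2 + mu**2)
    else:                                # TSVD: f_i = 1 for i <= k, else 0
        f = np.zeros_like(s)
        f[:k] = 1.0
    return Vt.T @ (f * coeffs)           # sum_i f_i (u_i^T b / sigma_i) v_i
```

With $\mu = 0$ (all filter factors equal to 1) this reduces to the unregularized least-squares solution, which is exactly the noise-amplifying case the text warns about.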
Many other heuristic methods can be found in the literature, such as the generalized cross-validation (GCV) function [15], the quasi-optimality criterion [46, 47], the Brezinski-Rodriguez-Seatzu estimators [4], the Hanke-Raus rule [24], and so on. When the noise level is known, the discrepancy principle [39, 41], the monotone error rule [45] and the balancing principle [35, 38] may be applied.

Once the SVD of the matrix $A$ is available, we can determine the regularization parameter by some of the aforementioned techniques, such as the L-curve, the GCV function or the Brezinski-Rodriguez-Seatzu estimators. To see this, we compute the regularized solution $x_\mu$ and the corresponding residual $r_\mu = b - Ax_\mu$ using (3) for a range of values of $\mu$. Then the L-curve, the GCV function or the Brezinski-Rodriguez-Seatzu estimators can easily be applied to determine the desired parameter $\mu$, and the corresponding solution $x_\mu$ is taken as the final regularized solution. We thus see that the SVD is a simple and efficient tool for solving discrete ill-posed problems by Tikhonov regularization, provided we can afford to compute the SVD of the coefficient matrix $A$. However, it is widely known that computing the SVD may be infeasible or extremely expensive when the discrete inverse problem is of large scale.

In this work we investigate how to use the SVD to solve large-scale discrete inverse problems in a more feasible and efficient manner. Certainly we should not work on the original large system directly, due to computational complexity, instability and memory limitations. Our approach is to first greatly reduce the size of the original large-scale discrete system, and then apply some existing regularization techniques, combined with the SVD, to solve the much smaller reduced system. Clearly, the solution of the reduced system must still be a good approximation of that of the original large system for this approach to work effectively.
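The parameter-selection procedure just described reuses a single SVD to evaluate many candidate values of $\mu$ cheaply. A minimal sketch of computing the L-curve points (residual norm, solution norm) this way, with illustrative names not taken from the paper:

```python
import numpy as np

def lcurve_points(A, b, mus):
    """Residual and solution norms of the Tikhonov solutions x_mu over a
    grid of regularization parameters, reusing one SVD of A.
    Returns (residual_norms, solution_norms). Illustrative sketch."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b                       # data projected onto the left singular vectors
    # squared norm of the part of b outside the column space of A
    r_perp2 = max(b @ b - beta @ beta, 0.0)
    res, sol = [], []
    for mu in mus:
        f = s**2 / (s**2 + mu**2)        # Tikhonov filter factors
        sol.append(np.linalg.norm(f * beta / s))       # ||x_mu||
        res.append(np.sqrt(np.sum(((1 - f) * beta)**2) + r_perp2))  # ||A x_mu - b||
    return np.array(res), np.array(sol)
```

Plotting `np.log(res)` against `np.log(sol)` gives the L-curve; the corner, where the curve bends most sharply, marks the chosen $\mu$.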
This is a challenging problem, and it will be the central focus of this study. In the remainder of this work we introduce more efficient strategies for dealing with large-scale discrete inverse problems, namely some randomized algorithms. These strategies are based on randomized algorithms developed recently in the theoretical computer science community. They can greatly reduce the size of the original discrete inverse problem while requiring access to the large matrix $A$ only twice, which is crucial when the matrix is of large size. Moreover, owing to their randomness, these algorithms also work well for noisy data.