Polynomial Chaos Expansion of Random Coefficients and the Solution of Stochastic Partial Differential Equations in the Tensor Train Format

Sergey Dolgov, Boris N. Khoromskij, Alexander Litvinenko, Hermann G. Matthies
SIAM/ASA Journal on Uncertainty Quantification, 2015
We apply the tensor train (TT) decomposition to construct the tensor product polynomial chaos expansion (PCE) of a random field, to solve the stochastic elliptic diffusion PDE with the stochastic Galerkin discretization, and to compute some quantities of interest (mean, variance, and exceedance probabilities). We assume that the random diffusion coefficient is given as a smooth transformation of a Gaussian random field. In this case, the PCE is delivered by a complicated formula, which lacks an
analytic TT representation. To construct its TT approximation numerically, we develop a new block TT cross algorithm, a method that computes the whole TT decomposition from a few evaluations of the PCE formula. The new method is conceptually similar to the adaptive cross approximation in the TT format but is more efficient when several tensors must be stored in the same TT representation, which is the case for the PCE. In addition, we demonstrate how to assemble the stochastic Galerkin matrix and to compute the solution of the elliptic equation and its postprocessing, staying in the TT format. We compare our technique with the traditional sparse polynomial chaos and the Monte Carlo approaches. In the tensor product polynomial chaos, the polynomial degree is bounded for each random variable independently. This provides higher accuracy than the sparse polynomial set or the Monte Carlo method, but the cardinality of the tensor product set grows exponentially with the number of random variables. However, when the PCE coefficients are implicitly approximated in the TT format, computations with the full tensor product polynomial set become possible. In the numerical experiments, we confirm that the new methodology is competitive in a wide range of parameters, especially where high accuracy and high polynomial degrees are required.

To some extent, these methods have already been applied to parametric problems. Nonintrusive (black box) tensor methods for multiparametric problems, i.e., "class 2," were developed in [4, 5, 15]. In particular, in [4] the authors follow the stochastic collocation approach and compute functionals of the solution of multiparametric PDEs. Since stochastic collocation allows solving uncoupled deterministic problems at different collocation points, a functional of the solution (e.g., the average value) can be approximated straightforwardly via a black box hierarchical tensor interpolation algorithm.
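For orientation, the following is a minimal, illustrative sketch of the TT format itself (function names are ours, not the authors' code): it compresses a full tensor into TT cores by sequential truncated SVDs. The paper's block cross algorithm instead builds the cores from a few entry evaluations, precisely so that the full tensor never has to be formed.

```python
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """Compress a full tensor into TT cores via sequential truncated SVDs.

    Illustrative only: the paper constructs the TT decomposition by cross
    interpolation from a few entries, without materializing the tensor.
    """
    shape = tensor.shape
    d = len(shape)
    cores = []
    rank = 1
    mat = tensor.reshape(rank * shape[0], -1)
    for k in range(d - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        keep = max(1, int(np.sum(s > eps * s[0])))   # truncate small modes
        cores.append(u[:, :keep].reshape(rank, shape[k], keep))
        rank = keep
        mat = (s[:keep, None] * vt[:keep]).reshape(rank * shape[k + 1], -1)
    cores.append(mat.reshape(rank, shape[-1], 1))
    return cores

def tt_full(cores):
    """Contract TT cores back to a full tensor (for verification)."""
    t = cores[0]
    for c in cores[1:]:
        t = np.tensordot(t, c, axes=(-1, 0))
    return t.squeeze(axis=(0, -1))
```

A tensor of exact low TT rank is stored with ranks 1 in every core, which is the storage saving that makes the full tensor product polynomial set tractable.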
Computing the whole stochastic solution is a more difficult problem, especially in the stochastic Galerkin framework, where the deterministic problems are coupled. In [36, 37, 46, 61, 70] the authors develop iterative methods and preconditioners to solve numerically discretized multiparametric problems. Several manipulations of the PCE with a low-rank approximation have been considered. In [19] the authors assume that the solution has a low-rank canonical polyadic (CP) tensor format and develop methods for the CP-formatted computation of level sets. In [45, 18] the tensor ranks of the stochastic operator were analyzed. The proper generalized decomposition was applied to high-dimensional stochastic problems in [50, 51]. In [33, 34, 35] the authors employed newer tensor formats, the tensor train (TT) and quantized TT (QTT), for the approximation of coefficients and the solution of stochastic elliptic PDEs. The complexity of the stochastic equation was studied theoretically, for example, by means of analytic regularity and (generalized) polynomial chaos (PC) approximation [68], and for control problems constrained by linear parametric elliptic and parabolic PDEs [38]. Other classical techniques for high-dimensional problems are sparse grids [28, 10, 49] and (quasi) Monte Carlo methods [26, 62, 39]. Nevertheless, tensor product methods are more flexible than sparse grids, as they avoid severe reductions of the model from the very beginning and instead adopt a suitable structure on the discrete level. Compared to Monte Carlo methods, tensor techniques work implicitly with the whole solution, and even the construction of a tensor format from entrywise given data in a black box manner relies on less randomness than the Monte Carlo approach.

In this article we approximate the PCE of the input coefficient κ(x, ω) in the TT format. After that we compute the solution u(x, ω) and perform all postprocessing in the same TT format.
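Since the diffusion coefficient is a smooth transformation φ of a Gaussian field, its PCE coefficients are projections onto Hermite polynomials. A one-variable sketch of that projection, computed by Gauss-Hermite quadrature, is given below; the paper's PCE formula is the multivariate analogue, and the function and parameter names here are ours.

```python
import math

import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def hermite_pce_coeffs(phi, degree, nquad=60):
    """Hermite PCE coefficients of phi(g) for g ~ N(0, 1), computed by
    Gauss-Hermite quadrature.

    One-dimensional illustration only; the paper projects a transformed
    Gaussian *field* onto a multivariate polynomial chaos basis.
    """
    x, w = hermegauss(nquad)       # nodes/weights for weight exp(-x**2 / 2)
    w = w / np.sqrt(2.0 * np.pi)   # renormalize to the standard Gaussian pdf
    coeffs = np.empty(degree + 1)
    for n in range(degree + 1):
        basis = np.zeros(n + 1)
        basis[n] = 1.0
        # orthonormal probabilists' Hermite polynomial He_n / sqrt(n!)
        h_n = hermeval(x, basis) / math.sqrt(math.factorial(n))
        coeffs[n] = np.sum(w * phi(x) * h_n)
    return coeffs
```

For the lognormal case phi = exp, the exact coefficients on the orthonormal basis are e^{1/2}/sqrt(n!), which the quadrature reproduces to high accuracy.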
The first stage, the computation of the PCE of κ, involves a lengthy formula defining each entry of the discretized coefficient. To perform this computation efficiently, we develop a new block cross approximation algorithm, which constructs the TT format for κ from a few evaluations of the entrywise formula. This formula delivers several tensors that are to be summed and approximated in a TT format. We show that the new algorithm is more efficient than several runs of a previously existing cross method [58] for each tensor separately. As soon as the coefficient is given in the TT format, it becomes very easy to construct the stiffness matrix derived from the stochastic Galerkin discretization of (1.1). We apply an alternating iterative tensor algorithm to solve the large linear system arising from (1.1) and finally use the cross algorithm again to compute the exceedance probability from the solution.

In the next section, we outline the general Galerkin, polynomial chaos expansion (PCE), and Karhunen-Loève expansion (KLE) discretization schemes for a random field. An introduction to the TT methods and the new block cross interpolation algorithm are presented in section 3, followed by details of how to apply the block cross algorithm to the PCE calculations.
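The idea of building a low-rank representation from a few entry evaluations is easiest to see in two dimensions. Below is a hedged sketch of greedy cross (skeleton) approximation of a matrix; the block TT cross algorithm of the paper generalizes this to d dimensions and to several tensors sharing one TT representation. For clarity the sketch materializes the full matrix to pick pivots, whereas a practical implementation samples only the crossed rows and columns.

```python
import numpy as np

def matrix_cross(entry, shape, rank, tol=1e-12):
    """Greedy cross (skeleton) approximation of a matrix from entrywise
    evaluations entry(i, j).

    Two-dimensional analogue of the cross idea only; names are ours and
    this is not the authors' block TT cross implementation.
    """
    m, n = shape
    A = np.array([[entry(i, j) for j in range(n)] for i in range(m)])
    approx = np.zeros(shape)
    rows, cols = [], []
    for _ in range(rank):
        res = A - approx
        i, j = np.unravel_index(np.argmax(np.abs(res)), shape)
        if abs(res[i, j]) < tol:
            break                      # residual negligible: stop early
        # rank-1 update through the pivot (one step of cross elimination)
        approx += np.outer(res[:, j], res[i, :]) / res[i, j]
        rows.append(i)
        cols.append(j)
    return approx, rows, cols
```

Each step removes one rank from the residual, so a matrix of exact rank r is recovered after r steps from the sampled cross of rows and columns.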
doi:10.1137/140972536