Low-rank tensor methods for linear systems and eigenvalue problems [thesis]

Christine Tobler, Lars Grasedyck, Daniel Kressner
2012
This thesis is concerned with methods for the approximate solution of high-dimensional linear systems and eigenvalue problems using low-rank tensor techniques. In recent years, new low-rank tensor decompositions have been developed, which allow the black-box approximation of a tensor to a given accuracy, and still have storage requirements that grow only linearly in the number of dimensions for a fixed rank. We have implemented a Matlab toolbox allowing the storage of a tensor in the hierarchical Tucker decomposition (HTD), one of the aforementioned low-rank tensor decompositions.
Additionally, basic operations such as the addition or inner product of two tensors in HTD are featured. Moreover, we introduce a new variant of approximation in HTD. A detailed description of all operations in the htucker toolbox is provided, as well as many examples.

First of all, I want to thank Prof. Daniel Kressner for leaving me a lot of freedom in research, for his advice that I took and his advice that I did not take, and for his patience with my convoluted explanations. I am also grateful to my co-examiners, Prof. Christoph Schwab, who carefully read my thesis and gave many helpful comments, and Prof. Lars Grasedyck, who drove to Zurich in the middle of the semester for my defense, and whose papers were what started my interest in tensors. The lecture on tensor methods by Dr. Boris Khoromskij, who let me organise the exercises for this lecture, opened up new perspectives on tensor methods to me. The time I spent at SAM was exciting. It was a great experience to be a part of this group, and I thank everyone for this time. In particular, I want to thank Roman for the many evening tea breaks and for our collaboration, and Cedric for the long discussions both about work and not. I also want to thank my (former) office mates: Ulrik for enduring my chatter in the office, Oleg for his enthusiasm and for reading through my most tedious proofs, Manuel for keeping work in perspective, and Martin for making me rethink my algorithms. I am also grateful to Dan, Ulrik and Ingeborg, and Oleg and Katja for many climbing weekends. I had the opportunity to supervise the Master's theses of Alejandro and Manuel, and the semester projects of Mischa and Antoine. I have learned something new from each of them, and thank them for this. Finally, I want to thank my parents for their support over these years. My work on this thesis was funded by the Swiss National Science Foundation, specifically the SNF research module "Preconditioned methods for large-scale model reduction" within the SNF ProDoc "Efficient Numerical Methods for Partial Differential Equations".

In the simplest case, a tensor X ∈ R^{n_1 × n_2 × ··· × n_d} is approximated by a rank-one tensor, vec(X) ≈ u_d ⊗ u_{d-1} ⊗ ··· ⊗ u_1, where vec stacks the entries of a tensor into a long column vector, in reverse lexicographical order, and ⊗ denotes the standard Kronecker product. Then, instead of the n_1 · n_2 ··· n_d entries of X, only the n_1 + n_2 + ··· + n_d entries of u_1, ..., u_d need to be stored. On the functional level, this corresponds to an approximation of f by a separable function. In Chapter 2, applications for the case of parametrized linear systems, high-dimensional parabolic problems and high-dimensional partial differential equation (PDE) eigenvalue problems are presented. In a typical application that we have in mind, X arises from the discretization of a high-dimensional or parameter-dependent partial differential equation and is only given implicitly as the solution of a typically huge linear system or eigenvalue problem. Consider a linear operator A mapping a tensor in R^{n_1 × n_2 × ··· × n_d} to a tensor in R^{m_1 × m_2 × ··· × m_d}. The aim is to compute a low-rank approximation of a tensor X defined implicitly through a linear system A(X) = B, where B has low rank, or an eigenvalue problem A(X) = λX with X ≠ 0. Typically, the linear operator A results from the discretization of a PDE on a tensor grid.
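The rank-one storage argument above can be checked directly in Matlab. The following is a minimal sketch (plain Matlab, independent of the htucker toolbox; the variable names and sizes are purely illustrative): it builds a rank-one tensor from factors u1, u2, u3, verifies that vec(X) equals the Kronecker product of the factors in reverse order, and compares the storage requirements.

    % Rank-one tensor from three factors; implicit expansion forms
    % X(i1,i2,i3) = u1(i1) * u2(i2) * u3(i3).
    n1 = 4; n2 = 5; n3 = 6;
    u1 = randn(n1, 1); u2 = randn(n2, 1); u3 = randn(n3, 1);
    X = reshape(u1, [n1 1 1]) .* reshape(u2, [1 n2 1]) .* reshape(u3, [1 1 n3]);

    % vec(X) in Matlab is X(:) (column-major, i.e. reverse lexicographical
    % order) and coincides with u3 kron u2 kron u1.
    err = norm(X(:) - kron(u3, kron(u2, u1)));
    fprintf('|| vec(X) - u3 (x) u2 (x) u1 || = %g\n', err);

    % Storage: n1*n2*n3 entries for the full tensor versus
    % n1 + n2 + n3 entries for the factors.
    fprintf('full: %d entries, factored: %d entries\n', n1*n2*n3, n1 + n2 + n3);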
The elementwise product of two tensors X and Y in HTD can be computed via a particularly efficient HTD truncation Z of X ⊗ Y. Using (3.20), the extracted tensor J^H Z represents an approximation of the elementwise product of X and Y, satisfying a corresponding error bound. Although the hierarchical ranks of J^H Z are typically much smaller than those of the exact elementwise product, this error bound is far from being sharp. It is therefore recommended to truncate J^H Z again after the extraction.

z = x .* y                     elementwise product of X and Y
z = elem_mult(x, y, opts)      approximate elementwise product, with opts defined as in truncate
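The two commands above suggest the following usage pattern. This is a hedged sketch only: the helper htenrandn for creating random tensors in HTD, the option fields max_rank and rel_eps, and the extra call to truncate are assumptions modelled on the toolbox's truncation options, not taken verbatim from the text above.

    % Two random tensors of size 20 x 20 x 20 x 20 in HTD
    % (htenrandn is assumed to be the toolbox's random-htensor helper).
    x = htenrandn([20 20 20 20]);
    y = htenrandn([20 20 20 20]);

    % Exact elementwise product: the hierarchical ranks of z are
    % (up to) the products of the corresponding ranks of x and y.
    z = x .* y;

    % Approximate elementwise product with a rank/accuracy budget;
    % the option fields below are assumptions modelled on truncate.
    opts.max_rank = 20;
    opts.rel_eps  = 1e-10;
    z_approx = elem_mult(x, y, opts);

    % A further truncation of the result, following the recommendation
    % above (may already be performed inside elem_mult).
    z_approx = truncate(z_approx, opts);

    rel_err = norm(z - z_approx) / norm(z);
    fprintf('relative error of approximate elementwise product: %g\n', rel_err);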
doi:10.3929/ethz-a-007587832