Solving parametric PDEs with neural networks: unfavorable structure vs. expressive power

Mones Konstantin Raslan, Technische Universität Berlin
This cumulative dissertation extends the theory of neural networks (NNs). The first part of this thesis ([PRV20], reproduced in Appendix A) provides a general structural analysis of the hypothesis class of NNs: we examine the algebraic and topological properties of the set of NNs with fixed architecture. We establish that this set is never convex, is hardly ever closed in classical function spaces, and that the parametrization of NNs is not inverse stable. These observations
could, in practice, lead to highly undesirable phenomena such as diverging weights or slow convergence of the underlying training algorithm.

The second part of this thesis deals with the concrete application of solving parametric partial differential equations (PDEs) with NNs. In typical modeling tasks, one must solve a PDE for a range of characterizing parameters, such as the shape of the domain, the boundary conditions, or the right-hand side. In this context, the development of algorithms that efficiently and accurately compute the solution for a new parameter is imperative. A large variety of reduced order models, which exploit the low dimensionality of the solution set, has been developed in the past. Moving away from model-based techniques and motivated by the success of NNs in applications, this thesis focuses on a data-driven approach to the solution of parametric PDEs. A factor in favor of NNs is that, once trained, they can compute a new solution with little computational effort compared to the cost of the training phase. The focus of this part of the thesis is an examination of the expressive power of NNs for solutions of parametric PDEs. In [GR21] (see Appendix B), we first derive almost optimal approximation rates, measured in Sobolev norms, for smooth functions by NNs with encodable weights. These results continue a long line of research and provide a consolidating proof strategy for deriving expressivity results based on the regularity of the [...]
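The structural pathologies from the first part can be made concrete in a small numerical sketch. The following numpy example is our own illustration (not taken from [PRV20]) for one-hidden-neuron ReLU networks: rescaled parameters that lie far apart in parameter space yet realize the same function (a failure of inverse stability), and a midpoint of two networks whose realization has more kinks than any single network in the class can produce (a consequence of non-convexity).

```python
import numpy as np

def relu_net(x, w, b, a, c):
    """One-hidden-neuron ReLU network: x -> a * relu(w*x + b) + c."""
    return a * np.maximum(w * x + b, 0.0) + c

x = np.linspace(-2.0, 2.0, 401)

# Failure of inverse stability: rescaling (w, a) -> (lam*w, a/lam) realizes
# the same function while the parameters drift arbitrarily far apart.
lam = 1000.0
f1 = relu_net(x, w=1.0, b=0.0, a=1.0, c=0.0)
f2 = relu_net(x, w=lam, b=0.0, a=1.0 / lam, c=0.0)

# Non-convexity: the midpoint of relu(x - 1) and relu(-x - 1) has two kinks
# (at x = -1 and x = 1), while every one-neuron network has at most one kink,
# so the midpoint leaves the hypothesis class.
g_mid = 0.5 * (relu_net(x, 1.0, -1.0, 1.0, 0.0)
               + relu_net(x, -1.0, -1.0, 1.0, 0.0))

def count_kinks(y, tol=1e-8):
    """Count grid points where a piecewise-linear function changes slope."""
    return int(np.sum(np.abs(np.diff(y, 2)) > tol))
```

On an equispaced grid, the second difference of a piecewise-linear function vanishes away from its kinks, which is what `count_kinks` exploits.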
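The low dimensionality of the solution set that reduced order models exploit can be illustrated with a snapshot/SVD (proper orthogonal decomposition) sketch. The parametric family `sin(mu * x)` below is a hypothetical stand-in for actual PDE solutions, and all names are illustrative, not from the thesis.

```python
import numpy as np

# Hypothetical parametric family u(x; mu) = sin(mu * x) in place of PDE solutions.
x = np.linspace(0.0, np.pi, 200)
mus = np.linspace(0.5, 2.0, 50)                       # parameter samples
S = np.stack([np.sin(mu * x) for mu in mus], axis=1)  # snapshot matrix (200 x 50)

# Proper orthogonal decomposition: the SVD of the snapshot matrix exposes the
# low dimensionality of the solution set via fast singular value decay.
U, svals, _ = np.linalg.svd(S, full_matrices=False)
energy = np.cumsum(svals**2) / np.sum(svals**2)
r = int(np.searchsorted(energy, 0.9999)) + 1  # modes for 99.99% of the energy
```

Because the family depends analytically on the parameter, the singular values decay rapidly and only a handful of the 50 snapshots' worth of modes carry essentially all of the energy.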
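As a flavor of such expressivity results, the classical sawtooth construction (in the spirit of Yarotsky's approximation of x²) builds, with a network of depth proportional to m, the piecewise-linear interpolant of x² on a dyadic grid, with uniform error 2^(-2m-2). This is a standard textbook construction, not the proof strategy of [GR21]; the sketch below realizes it in numpy.

```python
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

def tooth(t):
    # Hat function g: [0,1] -> [0,1], realizable by a single ReLU layer.
    return 2.0 * relu(t) - 4.0 * relu(t - 0.5) + 2.0 * relu(t - 1.0)

def sawtooth_square(x, m):
    """Yarotsky-style approximation of x**2 on [0,1]:
    f_m(x) = x - sum_{s=1}^m g^(s)(x) / 4**s, where g^(s) is the s-fold
    composition of the hat function; f_m is the piecewise-linear interpolant
    of x**2 on the dyadic grid of spacing 2**-m."""
    g, approx = x.copy(), x.copy()
    for s in range(1, m + 1):
        g = tooth(g)                 # s-fold composition g^(s)
        approx = approx - g / 4.0**s
    return approx

x = np.linspace(0.0, 1.0, 1025)
m = 5
err = np.max(np.abs(sawtooth_square(x, m) - x**2))
# linear interpolation of x**2 on intervals of length 2**-m gives
# uniform error 2**(-2m-2), i.e. the error halves twice per extra layer
```

Each additional composition of the hat function doubles the grid resolution, so the uniform error decays geometrically in the depth, a hallmark of the depth-based expressivity results the thesis builds on.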
doi:10.14279/depositonce-11332