Non-intrusive reduced order modeling of parametrized PDEs by kernel POD and neural networks
We propose a nonlinear reduced basis method for the efficient approximation of parametrized partial differential equations (PDEs), exploiting kernel proper orthogonal decomposition (KPOD) for the generation of a reduced-order space and neural networks for the evaluation of the reduced-order approximation. In particular, we use KPOD in place of the more classical POD on a set of high-fidelity solutions of the problem at hand to extract a reduced basis. This method provides a more accurate approximation of the set of snapshots with a basis of lower dimension, while maintaining the same efficiency as POD. A neural network (NN) is then used to find the coefficients of the reduced basis following a supervised learning paradigm, and is shown to be effective in learning the map between the time/parameter values and the projection of the high-fidelity snapshots onto the reduced space. In this NN, both the number of hidden layers and the number of neurons vary with the intrinsic dimension of the differential problem at hand and the size of the reduced space. The resulting adaptively built NN achieves good performance in both the training and testing phases. Our approach is then tested on two benchmark problems: a one-dimensional wave equation and a two-dimensional nonlinear lid-driven cavity problem. Finally, we compare the proposed KPOD-NN technique with a POD-NN strategy, showing that KPOD reduces the number of modes that must be retained to reach a given accuracy in the reduced basis approximation. As a consequence, the NN built to find the coefficients of the KPOD expansion is smaller and less computationally demanding to train than the one used in the POD-NN strategy.
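To make the KPOD step concrete, the following is a minimal sketch of how reduced coordinates can be extracted from a snapshot set via a kernel eigen-decomposition (kernel PCA), which is the core mechanism behind KPOD. Everything here is illustrative and not taken from the paper: the Gaussian kernel, the function names (`gaussian_kernel`, `kpod_coefficients`), the parameter values, and the toy snapshot set are all assumptions. The supervised NN that maps time/parameter values to these coefficients would be trained on the output `Q` and is omitted here.

```python
import numpy as np

def gaussian_kernel(S, gamma):
    # S: (n_snapshots, n_dofs) matrix of high-fidelity snapshots.
    # Returns the Gaussian (RBF) kernel matrix between snapshots.
    sq = np.sum(S**2, axis=1)
    D = sq[:, None] + sq[None, :] - 2.0 * S @ S.T  # pairwise squared distances
    return np.exp(-gamma * D)

def kpod_coefficients(S, n_modes, gamma=1e-2):
    # Illustrative KPOD-style reduction: center the kernel matrix in
    # feature space, take its leading eigenvectors, and project each
    # snapshot onto the resulting modes.
    n = S.shape[0]
    K = gaussian_kernel(S, gamma)
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    Kc = H @ K @ H                               # centered kernel matrix
    w, V = np.linalg.eigh(Kc)                    # ascending eigenvalues
    idx = np.argsort(w)[::-1][:n_modes]          # keep the leading modes
    w, V = w[idx], V[:, idx]
    alphas = V / np.sqrt(np.maximum(w, 1e-12))   # normalize in feature space
    # Reduced coordinates of each snapshot (the NN's regression targets).
    return Kc @ alphas                           # shape (n_snapshots, n_modes)

# Toy usage: snapshots of a parametrized 1D profile (hypothetical data).
mu = np.linspace(0.5, 2.0, 40)
x = np.linspace(0.0, 1.0, 100)
S = np.array([np.sin(np.pi * m * x) for m in mu])
Q = kpod_coefficients(S, n_modes=4)
print(Q.shape)  # (40, 4)
```

In a KPOD-NN pipeline of the kind described above, a network would then be fitted to the pairs (parameter value, row of `Q`), so that new parameter values can be mapped to reduced coordinates without solving the full-order problem.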