Non-Gaussian statistical inverse problems. Part II: Posterior convergence for approximated unknowns

Sari Lasanen
2012, Inverse Problems and Imaging
The statistical inverse problem of estimating the probability distribution of an infinite-dimensional unknown given its noisy indirect observation is studied in the Bayesian framework. In practice, one often considers only finite-dimensional unknowns and investigates their probabilities numerically. As many unknowns are function-valued, it is of interest to know whether the estimated probabilities converge when the finite-dimensional approximations of the unknown are refined. In this work, the generalized Bayes formula is shown to be a powerful tool in these convergence studies. With the help of the generalized Bayes formula, the question of convergence of the posterior distributions is reduced to the convergence of the finite-dimensional (or any other) approximations of the unknown. The approach allows a wide class of prior distributions, while the restrictions concern mainly the noise model and the direct theory. Three modes of convergence of the posterior distributions are considered: weak convergence, setwise convergence, and convergence in variation. The convergence of conditional mean estimates is also studied. Several examples of applicable infinite-dimensional non-Gaussian noise models are provided, including a generalization of the Cameron-Martin formula for certain non-Gaussian measures. The well-posedness of Bayesian statistical inverse problems is also studied.
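For orientation only, the following LaTeX sketch writes out a schematic generalized Bayes formula and the three convergence modes named in the abstract. The notation (unknown U with prior mu_pr on a space X, likelihood l(m,u) induced by the noise model, projections P_n giving finite-dimensional approximations U_n) is illustrative and is not claimed to match the paper's own formulation.

% Minimal sketch, assuming a likelihood l(m,u) exists for the noise model
% and the normalizing constant below is positive and finite; the notation
% is hypothetical and not taken from the paper.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

Posterior of the unknown $U$ given the observation $M = m$:
\begin{equation*}
  \mu_{\mathrm{post}}(A \mid m)
  = \frac{\displaystyle\int_A \ell(m,u)\,\mu_{\mathrm{pr}}(\mathrm{d}u)}
         {\displaystyle\int_X \ell(m,u)\,\mu_{\mathrm{pr}}(\mathrm{d}u)},
  \qquad A \subset X \text{ measurable}.
\end{equation*}

Approximated posteriors $\mu^{(n)}_{\mathrm{post}}(\cdot \mid m)$ arise from
replacing $U$ by $U_n = P_n U$. The three standard modes of convergence are
\begin{align*}
  &\text{weak:}      && \int f\,\mathrm{d}\mu^{(n)}_{\mathrm{post}}
                         \to \int f\,\mathrm{d}\mu_{\mathrm{post}}
                         \quad \text{for all bounded continuous } f,\\
  &\text{setwise:}   && \mu^{(n)}_{\mathrm{post}}(A) \to \mu_{\mathrm{post}}(A)
                         \quad \text{for all measurable } A,\\
  &\text{variation:} && \sup_{A}\,\bigl|\mu^{(n)}_{\mathrm{post}}(A)
                         - \mu_{\mathrm{post}}(A)\bigr| \to 0.
\end{align*}

\end{document}

These are the standard definitions of the three modes; convergence in variation implies setwise convergence, which in turn implies weak convergence.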
doi:10.3934/ipi.2012.6.267