Implicit Sampling, with Application to Data Assimilation [chapter]

Alexandre J. Chorin, Matthias Morzfeld, Xuemin Tu
2014 Partial Differential Equations: Theory, Control and Approximation  
There are many computational tasks in which it is necessary to sample a given probability density function (pdf), i.e., use a computer to construct a sequence of independent random vectors x_i, i = 1, 2, ..., whose histogram converges to the given pdf. This can be difficult because the sample space can be huge, and, more importantly, because the portion of the space where the density is significant can be very small, so that an ill-designed sampling scheme may miss it. Indeed, Markov chain Monte Carlo, the most widely used sampling scheme, can be thought of as a search algorithm, in which one starts at an arbitrary point and advances step by step towards the high-probability region of the space. This can be expensive, in particular because one is typically interested in independent samples, while the chain has a memory. We present an alternative, in which samples are found by solving an algebraic equation with a random right-hand side rather than by following a chain; each sample is independent of the previous samples. We explain the construction in the context of numerical integration, and then apply it to data assimilation.
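To make the construction concrete, here is a minimal one-dimensional sketch of the idea the abstract describes, under assumptions not stated there: the target pdf is written as p(x) ∝ exp(-F(x)) with F having a unique minimum phi = F(x_min) and increasing on either side of x_min, a standard normal reference variable xi is drawn, and the algebraic equation F(x) - phi = xi^2/2 is solved by bisection. The function names and the bisection solver are illustrative choices, not the chapter's (multivariate) algorithm; for non-quadratic F each sample carries a weight, computed here from the Jacobian dx/dxi.

```python
import random

def implicit_sample(F, dF, x_min, phi, rng):
    """Draw one weighted sample from p(x) ~ exp(-F(x)) by solving an
    algebraic equation with a random right-hand side:
        F(x) - phi = xi^2 / 2,   xi ~ N(0, 1),
    where phi = F(x_min) is the minimum of F. Returns (x, weight)."""
    xi = rng.gauss(0.0, 1.0)
    target = phi + 0.5 * xi * xi
    s = 1.0 if xi >= 0.0 else -1.0   # search on one side of the minimizer

    # Bracket the root: F is assumed to increase away from x_min.
    step = 1.0
    while F(x_min + s * step) < target:
        step *= 2.0

    # Bisection for t in [0, step], with x = x_min + s*t.
    lo, hi = 0.0, step
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if F(x_min + s * mid) < target:
            lo = mid
        else:
            hi = mid
    x = x_min + s * 0.5 * (lo + hi)

    # Weight |dx/dxi| = |xi / F'(x)| (from differentiating the equation);
    # it is constant, and hence ignorable, exactly when F is quadratic.
    w = 1.0 if xi == 0.0 else abs(xi / dF(x))
    return x, w

# Usage: standard Gaussian target, F(x) = x^2/2, so x_min = 0, phi = 0.
# Here the equation reduces to x = xi and all weights equal 1.
rng = random.Random(0)
F = lambda x: 0.5 * x * x
dF = lambda x: x
samples = [implicit_sample(F, dF, 0.0, 0.0, rng) for _ in range(20000)]
xs = [x for x, _ in samples]
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
```

Note how this realizes the claim in the abstract: each call draws a fresh xi and solves the equation from scratch, so the samples are independent, with no chain and no burn-in.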
doi:10.1007/978-3-642-41401-5_6