Monte Carlo algorithms for hypothesis testing and for hidden Markov models
Dong Ding, Axel Gandy, Imperial College London
2019
Monte Carlo methods are useful tools for approximating the numerical solution of a problem by random sampling when its analytic solution is intractable or computationally intensive. The main focus of this work is to investigate Monte Carlo methods for two inference problems: hypothesis testing and posterior analysis in a hidden Markov model (HMM). The first part of this thesis focuses on deciding, via Monte Carlo simulation, whether the p-value of a hypothesis test lies above or below a fixed threshold. We wish to control the resampling risk, the probability that the Monte Carlo decision differs from the true decision based on the unknown p-value. We present the confidence sequence method (CSM), a simple Monte Carlo testing procedure which bounds the resampling risk uniformly. CSM is attractive because of its simple implementation and its performance, which is comparable to that of its competitors. The second part of the thesis focuses on two posterior distributions of an HMM: the joint smoothing distribution and the posterior of an unknown parameter. We apply a divide-and-conquer strategy (Lindsten et al., 2017) to develop Monte Carlo algorithms that provide sample approximations of the target distribution. We propose the tree-based particle smoothing algorithm (TPS) to estimate the joint smoothing distribution. We then assume an unknown parameter in the HMM and extend TPS to approximate its posterior; we refer to this extension as the tree-based parameter estimation algorithm (TPE). TPS and TPE both construct an auxiliary tree that recursively splits the model into sub-models. The root of the tree represents the target distribution of the full model. We propose different forms of intermediate target distributions for the sub-models associated with the non-root nodes, a choice which is crucial to sampling quality. For the sampling process, we generate initial samples independently at the leaf nodes and then recursively merge these samples along the tree until reaching the root. Each merging step involves importance sampling for the (intermediate) target distribution. A more [...]
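To illustrate the flavour of CSM described above, the sketch below implements a sequential Monte Carlo test that stops as soon as a Robbins-type confidence sequence for the unknown p-value excludes the threshold. The function name csm_decision, the user-supplied sample_exceedance callback, and the default tolerances are assumptions made for this sketch, not the interface used in the thesis.

# Minimal sketch of the confidence-sequence idea behind CSM (assumed
# interface, not the thesis code). `sample_exceedance()` is a user-supplied
# callable returning 1 when a fresh Monte Carlo replicate of the test
# statistic is at least as extreme as the observed one, so the p-value is
# the mean of these indicators.
import numpy as np
from scipy.stats import binom

def csm_decision(sample_exceedance, alpha=0.05, eps=1e-3, max_iter=100_000):
    """Decide whether the unknown p-value lies above or below `alpha`.

    Uses Robbins' confidence sequence for a binomial proportion,
    {p : (n + 1) * Binomial(n, p).pmf(s_n) > eps}, and stops once `alpha`
    leaves this set; the probability of a wrong decision (the resampling
    risk) is then bounded by `eps` uniformly over the true p-value.
    """
    s = 0  # number of exceedances observed so far
    for n in range(1, max_iter + 1):
        s += sample_exceedance()
        # alpha is excluded from the running confidence sequence once this holds
        if (n + 1) * binom.pmf(s, n, alpha) <= eps:
            return ("reject H0" if s / n < alpha else "do not reject H0"), n
    return "undecided", max_iter

# Toy usage: the true p-value is 0.02, so the test should reject at alpha = 0.05.
rng = np.random.default_rng(1)
decision, n_samples = csm_decision(lambda: rng.random() < 0.02)
print(decision, "after", n_samples, "Monte Carlo samples")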
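In the same spirit, the following sketch shows a single merge step of a divide-and-conquer particle scheme: particles drawn independently under the intermediate targets of two child nodes are paired, reweighted by importance sampling toward the parent's intermediate target, and resampled. The function merge_nodes and its arguments are hypothetical placeholders; TPS and TPE additionally specify how the intermediate targets themselves are constructed along the auxiliary tree.

# Minimal sketch of one merge step in a divide-and-conquer particle scheme,
# in the spirit of Lindsten et al. (2017); names and interface are assumed.
import numpy as np

def merge_nodes(rng, left, right, log_left, log_right, log_parent):
    """Combine particle sets of two child nodes into particles for their parent.

    left, right : arrays of shape (N, d_left) and (N, d_right)
    log_left, log_right : log-densities of the children's intermediate targets
    log_parent : log-density of the parent's intermediate target on the
                 concatenated state
    """
    merged = np.concatenate([left, right], axis=1)
    # importance weights: parent target over the product of child targets
    log_w = log_parent(merged) - log_left(left) - log_right(right)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    # multinomial resampling yields an unweighted particle set for the parent
    idx = rng.choice(len(merged), size=len(merged), p=w)
    return merged[idx]

# Toy usage: two scalar children with N(0,1) intermediate targets, and a
# parent target that additionally couples the two boundary states.
rng = np.random.default_rng(0)
N = 1000
left = rng.standard_normal((N, 1))
right = rng.standard_normal((N, 1))
log_gauss = lambda x: -0.5 * np.sum(x**2, axis=1)
log_parent = lambda xy: log_gauss(xy) - 0.5 * (xy[:, 0] - xy[:, 1])**2
parent_particles = merge_nodes(rng, left, right, log_gauss, log_gauss, log_parent)
print(parent_particles.shape)  # (1000, 2)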
doi:10.25560/68618
fatcat:z3mcav46vfbwdjugee5y3vdxeu