1,499 Hits in 4.6 sec

Naive asymptotics for hitting time bounds in Markov chains

Vernon Rego
1992 Acta Informatica  
While obtaining expected passage times is usually a numerical procedure for general Markov chains, the results presented here outline a simple approach to bound expected passage times provided the chains  ...  A set of sufficient conditions is obtained for Markov chains to yield upper and lower passage time bounds.  ...  Uppuluri of ORNL for their most helpful comments.  ... 
doi:10.1007/bf01185562 fatcat:vhmfcf34q5c5fevzgie5qmump4
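For context on what the bounds in this entry sidestep: expected passage (hitting) times for a small finite chain can be computed exactly from the standard first-step linear system h_i = 1 + Σ_j P[i,j] h_j for i ≠ target, h[target] = 0. The sketch below is illustrative only, with a made-up 3-state transition matrix; `expected_hitting_times` is a hypothetical helper, not from the paper.

```python
import numpy as np

def expected_hitting_times(P, target):
    """Expected number of steps to reach `target` from each state,
    obtained by solving (I - Q) h = 1 over the non-target states."""
    n = P.shape[0]
    idx = [i for i in range(n) if i != target]
    Q = P[np.ix_(idx, idx)]                    # transitions among non-target states
    h_sub = np.linalg.solve(np.eye(len(idx)) - Q, np.ones(len(idx)))
    h = np.zeros(n)
    h[idx] = h_sub                             # h[target] stays 0 by definition
    return h

# Made-up 3-state chain; state 2 is the target
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])
h = expected_hitting_times(P, target=2)
# h[0] = 20/3, h[1] = 14/3, h[2] = 0
```

Solving this system costs O(n^3) in general, which is why closed-form upper and lower bounds of the kind the paper derives are attractive for large chains.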

Page 4470 of Mathematical Reviews Vol. , Issue 93h [page]

1993 Mathematical Reviews  
Dean Isaacson (1-IASU-S) 60 PROBABILITY THEORY AND STOCHASTIC PROCESSES 93h:60109 60J10 68Q25 Rego, Vernon (1-PURD-C) Naive asymptotics for hitting time bounds in Markov chains.  ...  Summary: "A set of sufficient conditions is obtained for Markov chains to yield upper and lower passage time bounds.  ... 

Markov Chain Monte Carlo Algorithms: Theory and Practice [chapter]

Jeffrey S. Rosenthal
2009 Monte Carlo and Quasi-Monte Carlo Methods 2008  
We describe the importance and widespread use of Markov chain Monte Carlo (MCMC) algorithms, with an emphasis on the ways in which theoretical analysis can help with their practical implementation.  ...  In particular, we discuss how to achieve rigorous quantitative bounds on convergence to stationarity using the coupling method together with drift and minorisation conditions.  ...  At last count, the MCMC Preprint Service lists about seven thousand research papers, and the phrase "Markov chain Monte Carlo" elicits over three hundred thousand hits in Google.  ... 
doi:10.1007/978-3-642-04107-5_9 fatcat:royg2nehd5gvfnqhqryl6ig4a4
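The quantitative convergence that drift and minorisation conditions certify for general chains can be observed directly on a finite chain, where total variation distance to stationarity is computable in closed form. A minimal sketch, using a hypothetical two-state chain (not from the paper):

```python
import numpy as np

# Two-state chain with stationary distribution pi = (2/3, 1/3);
# its second eigenvalue is 0.7, so TV distance decays like 0.7^n,
# the geometric rate that coupling/drift bounds certify abstractly.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
pi = np.array([2 / 3, 1 / 3])

def tv(n):
    """Total variation distance of P^n(0, .) from pi."""
    return 0.5 * np.abs(np.linalg.matrix_power(P, n)[0] - pi).sum()
```

Here tv(n) equals (1/3) * 0.7**n exactly, so each step contracts the distance by the factor 0.7; rigorous bounds of the kind the chapter surveys give such contraction factors without diagonalising the (possibly enormous or continuous) transition kernel.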

Importance sampling for Markov chains

Indira Kuruganti, Stephen G. Strickland
1996 Proceedings of the 28th conference on Winter simulation - WSC '96  
In this paper we describe several computational algorithms useful in studying importance sampling (IS) for Markov chains.  ...  We consider two classes of problems: hitting times and fixed-horizon costs.  ...  L(x) = E_p[g(x)], T_F = E_p[Σ_{i=1}^M T_i] = E_p[T_i] E_p[M]. HITTING TIMES Consider a continuous-time Markov chain defined on a finite state space S.  ... 
doi:10.1145/256562.256624 fatcat:mckd7j5cwzdplgpv2depy2g5qq
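A minimal sketch of importance sampling for a hitting-time problem, under assumptions of my own choosing (a biased random walk on {0, ..., N}, with the up/down probabilities swapped as the change of measure; the function name and parameters are hypothetical, not from the paper). Each simulated step accumulates a likelihood-ratio factor, and only trajectories that hit N contribute their weight:

```python
import random

def is_hit_probability(p=0.3, N=10, n_samples=20000, seed=1):
    """IS estimate of P(walk started at 1 hits N before 0),
    simulating under the tilted chain with up-probability 1 - p."""
    q, p_t, q_t = 1 - p, 1 - p, p      # tilt: swap up/down probabilities
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        x, w = 1, 1.0                  # start at state 1, unit weight
        while 0 < x < N:
            if rng.random() < p_t:     # simulate under the tilted chain
                x += 1
                w *= p / p_t           # likelihood ratio for an up-step
            else:
                x -= 1
                w *= q / q_t           # likelihood ratio for a down-step
        if x == N:                     # only hitting paths contribute
            total += w
    return total / n_samples

est = is_hit_probability()
```

For this tilt the weight of every successful path collapses to (p/q)**(N-1), so the estimator has very low variance; the gambler's-ruin value here is about 2.8e-4, which naive simulation would need millions of runs to resolve.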

Structured variational methods for distributed inference: Convergence analysis and performance-complexity tradeoff

Yanbing Zhang, Huaiyu Dai
2009 2009 IEEE International Symposium on Information Theory  
performance in given time).  ...  correspond to edge processes, typically involving non-reversible Markov chains.  ...  Then by the Chernoff bound, the probability that there is at least one time that the walk hits the boundary edges before expected time to hit a boundary node in a cluster and the coupling time in the  ... 
doi:10.1109/isit.2009.5205928 dblp:conf/isit/ZhangD09 fatcat:ua63ooshjvd7pbgimz7ad4ddi4

Page 7234 of Mathematical Reviews Vol. , Issue 98K [page]

1998 Mathematical Reviews  
time of a specific state in the case of a finite state ergodic Markov chain.  ...  Summary: “The Bayesian bootstrap for Markov chains is the Bayesian analogue of the bootstrap method for Markov chains.  ... 

Testing Symmetric Markov Chains from a Single Trajectory [article]

Constantinos Daskalakis, Nishanth Dikkala, Nick Gravin
2017 arXiv   pre-print
the hitting times of the model chain M0 is O(n) in the size of the state space n.  ...  We provide efficient testers and information-theoretic lower bounds for testing identity of symmetric Markov chains under our proposed measure of difference, which are tight up to logarithmic factors if  ...  Hitting Times and Mixing Times Two commonly studied random variables associated with Markov chains which are relevant to this paper are their mixing times and hitting times.  ... 
arXiv:1704.06850v2 fatcat:ka6snzgo4rf43nofn6ebik2wda

Discrete Hit-and-Run for Sampling Points from Arbitrary Distributions Over Subsets of Integer Hyperrectangles

Stephen Baumert, Archis Ghate, Seksan Kiatsupaibul, Yanfang Shen, Robert L. Smith, Zelda B. Zabinsky
2009 Operations Research  
To surmount this difficulty, we propose Discrete Hit-and-Run (DHR), a Markov chain motivated by the Hit-and-Run algorithm known to be the most efficient method for sampling from log-concave distributions  ...  In addition to this asymptotic analysis, we investigate finite-time behavior of DHR and present a variety of examples where DHR exhibits polynomial performance.  ...  by a membership oracle, and hence can be employed for approximate sampling from π over S, and (ii) compute finite-time performance bounds for this Markov chain sampler for some special cases of S and π  ... 
doi:10.1287/opre.1080.0600 fatcat:6ape5mvpube4na2yr7li2lohlm
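The two-phase "pick a line, then pick a point on it" structure of hit-and-run can be sketched in a few lines. The version below is a deliberately simplified coordinate-direction variant with a Metropolis correction, not the actual DHR algorithm of Baumert et al. (which uses random biwalk directions); all names here are hypothetical.

```python
import math
import random

def hit_and_run_step(x, k, log_pi, rng):
    """One simplified hit-and-run-style step on the integer box {0,...,k}^d:
    choose a random axis, propose uniformly along that line segment,
    and accept with the Metropolis ratio for the target pi."""
    i = rng.randrange(len(x))          # phase 1: random (coordinate) direction
    y = list(x)
    y[i] = rng.randint(0, k)           # phase 2: uniform point on the segment
    if math.log(rng.random()) < log_pi(tuple(y)) - log_pi(tuple(x)):
        return tuple(y)                # accept the proposed point
    return tuple(x)                    # reject: stay put

# Usage: sample from pi(x) proportional to exp(-(x0 + x1)) on {0,...,3}^2
rng = random.Random(0)
log_pi = lambda x: -sum(x)
x, samples = (0, 0), []
for _ in range(5000):
    x = hit_and_run_step(x, 3, log_pi, rng)
    samples.append(x)
mean0 = sum(s[0] for s in samples) / len(samples)
```

Because the proposal is uniform on the whole feasible segment rather than a local move, a step can cross the box in one jump, which is the intuition behind hit-and-run's good mixing on well-shaped regions.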

Bootstrap uniform central limit theorems for Harris recurrent Markov chains [article]

Gabriela Ciołek
2016 arXiv   pre-print
The main objective of this paper is to establish bootstrap uniform functional central limit theorem for Harris recurrent Markov chains over uniformly bounded classes of functions.  ...  We show that in the atomic case the proof of the bootstrap uniform central limit theorem for Markov chains for functions dominated by a function in L^2 space proposed by Radulović (2004) can be significantly  ...  Let τ_A = τ_A(1) = inf{n ≥ 1 : X_n ∈ A} be the first time when the chain hits the regeneration set A and τ_A(j) = inf{n > τ_A(j − 1) : X_n ∈ A} for j ≥ 2.  ... 
arXiv:1601.01470v1 fatcat:atqi6p7cevdlfmbhq2njlhkoa4
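The successive hitting times τ_A(1) < τ_A(2) < ... defined in this entry split a trajectory into regeneration blocks, which is the basic device behind bootstrap methods for atomic chains. A minimal sketch with a made-up trajectory (the helper name is hypothetical):

```python
def regeneration_times(path, A):
    """Indices n >= 1 at which the trajectory visits the set A,
    i.e. the successive hitting times tau_A(1), tau_A(2), ..."""
    return [n for n in range(1, len(path)) if path[n] in A]

# Made-up trajectory on states {0, 1, 2}, regenerating at the atom {0}
path = [2, 0, 1, 0, 2, 0, 1]
taus = regeneration_times(path, A={0})
# taus == [1, 3, 5]; the blocks path[2:4], path[4:6] between visits
# are i.i.d. for an atomic chain, so they can be resampled to bootstrap.
```

Resampling whole blocks (rather than single observations) is what preserves the dependence structure of the chain in the bootstrap.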

Markov Kernels Local Aggregation for Noise Vanishing Distribution Sampling [article]

Florian Maire, Pierre Vandekerkhove
2022 arXiv   pre-print
Some examples for which we show (theoretically or empirically) that a locally-weighted aggregation converges substantially faster and yields smaller asymptotic variances than an equivalent random-scan  ...  to a kernel which is relevant for the local topology of the target distribution.  ...  Explicit bounds on related asymptotic quantities such as the rate of convergence of the Markov chain (for example in total variation) or its asymptotic variance (for squared integrable functions) may be  ... 
arXiv:1806.09000v2 fatcat:jq5qxmfl2zfzvgdvlh47id3xqu

On The Memory Complexity of Uniformity Testing [article]

Tomer Berg, Or Ordentlich, Ofer Shayevitz
2022 arXiv   pre-print
Prior works in the field have almost exclusively used collision counting for upper bounds, and the Paninski mixture for lower bounds.  ...  Thus, different proof techniques are needed in order to attain our bounds.  ...  Let p_erg be the asymptotic probability of success in the new ergodic chain.  ... 
arXiv:2206.09395v1 fatcat:oc6rklbn4rfsbhfox2gerai3em

Multi-scale metastable dynamics and the asymptotic stationary distribution of perturbed Markov chains [article]

Volker Betz, Stéphane Le Roux
2014 arXiv   pre-print
We consider a simple but important class of metastable discrete time Markov chains, which we call perturbed Markov chains.  ...  Closed probabilistic expressions are given for the asymptotic transition probabilities of these chains, but we also show how to compute them in a fast and numerically stable way.  ...  For a Markov chain X on a state space S, the hitting time of a set A ⊂ S is denoted by τ_A(X) = inf{n ≥ 0 : X_n ∈ A}, and the return time by τ_A^+(X) = inf{n > 0 : X_n ∈ A}.  ... 
arXiv:1412.6979v1 fatcat:winkuxckgzdkhh4to6amahgmve

Rates of convergence for lamplighter processes

Olle Häggström, Johan Jonasson
1997 Stochastic Processes and their Applications  
Consider a graph, G, for which the vertices can have two modes, 0 or 1. Suppose that a particle moves around on G according to a discrete time Markov chain with the following rules.  ...  In the latter case we show that the convergence rate is asymptotically determined by the cover time C_N in that the total variation norm after aN^2 steps is given by P(C_N > aN^2).  ...  Acknowledgements We thank the referee for valuable comments and corrections.
doi:10.1016/s0304-4149(97)00007-0 fatcat:y4ekchgjorer3kltcothc64ihm

Adaptive Importance Sampling Technique for Markov Chains Using Stochastic Approximation

T. P. I. Ahamed, V. S. Borkar, S. Juneja
2006 Operations Research  
For a discrete-time finite-state Markov chain, we develop an adaptive importance sampling scheme to estimate the expected total cost before hitting a set of terminal states.  ...  The updates are shown to concentrate asymptotically in a neighborhood of the desired zero variance estimator.  ...  Acknowledgements: The authors would like to thank the area editor, the associate editor and the referees for their inputs that led to considerable improvements in the paper.  ... 
doi:10.1287/opre.1060.0291 fatcat:7tkmixibtfesvlmuxdtifutem4

Stochastic Optimization on Continuous Domains With Finite-Time Guarantees by Markov Chain Monte Carlo Methods

Andrea Lecchini-Visintini, John Lygeros, Jan M. Maciejowski
2010 IEEE Transactions on Automatic Control  
We introduce bounds on the finite-time performance of Markov chain Monte Carlo algorithms in approaching the global solution of stochastic optimization problems over continuous domains.  ...  A comparison with other state-of-the-art methods having finite-time guarantees for solving stochastic programming problems is included.  ...  Weak convergence to 1 implies that, asymptotically, the iterates hit the set of approximate value optimizers, for any ε > 0, with probability one [21]-[25].  ... 
doi:10.1109/tac.2010.2078170 fatcat:e4kxu2v2dbdzjbvmrdeso4d6f4
Showing results 1 — 15 out of 1,499 results