
Building Mean Field State Transition Models Using The Generalized Linear Chain Trick and Continuous Time Markov Chain Theory [article]

Paul J. Hurtado, Cameron Richards
2020 arXiv   pre-print
Intuitively, phase-type distributions are the absorption time distributions for continuous time Markov chains (CTMCs).  ...  The well-known Linear Chain Trick (LCT) allows modelers to derive mean field ODEs that assume gamma (Erlang) distributed passage times, by transitioning individuals sequentially through a chain of sub-states  ...  an equivalent Markov chain but where we only track transitions to new states.  ... 
arXiv:2007.03902v1 fatcat:qnwdt2vikrcfrd2xatlpfwdrcy
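
The abstract above describes the Linear Chain Trick in enough detail to sketch it: an Erlang(k, r)-distributed dwell time in a stage is modeled by k sub-states traversed sequentially at rate r, giving a system of mean field ODEs. A minimal Python illustration (all names and parameter values are made up for the example, not taken from the paper):

```python
import numpy as np
from scipy.integrate import solve_ivp

k, r = 5, 1.0  # Erlang shape and rate: mean dwell time k / r in the stage

def rhs(t, x):
    # x[0..k-1] are the k sub-states of one stage; new arrivals enter x[0]
    # and individuals advance along the chain at rate r (the LCT chain).
    inflow = 0.1 * np.exp(-t)            # illustrative external inflow
    dx = np.empty(k)
    dx[0] = inflow - r * x[0]
    for i in range(1, k):
        dx[i] = r * x[i - 1] - r * x[i]
    return dx

sol = solve_ivp(rhs, (0.0, 30.0), np.zeros(k))
print(sol.y.sum(axis=0)[-1])             # total mass remaining in the stage
```

The time to traverse all k sub-states is a sum of k independent Exp(r) variables, i.e. Erlang(k, r), which is exactly the passage-time assumption the trick encodes.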

Asymptotic Properties of the Markov Chain Model method of finding Markov chains Generators of Empirical Transition Matrices in Credit Ratings Applications

Fred Nyamitago Monari, Dr. George Otieno Orwa, Dr. Joseph Kyalo Mung'atu
2016 IOSR Journal of Mathematics  
Conditions under which the Series method of finding Markov chain generators of empirical transition matrices applies in credit ratings settings are identified in this article.  ...  I find that transitions on municipal bond ratings are described adequately for a period of up to five years.  ... 
doi:10.9790/5728-1204035360 fatcat:zz2zom5gpba5lgcktxbpn24fmq
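
The "Series method" named in the title is, in its usual form, the Mercator series for the matrix logarithm, Q = Σ_{k≥1} (−1)^{k+1} (P − I)^k / k, which converges when ‖P − I‖ < 1. A hedged sketch (the transition matrix is illustrative, and no claim is made that this matches the authors' exact procedure):

```python
import numpy as np

def series_generator(P, terms=50):
    """Truncated log series for a generator Q with expm(Q) = P.

    The series converges when ||P - I|| < 1, e.g. for strongly
    diagonally dominant one-period transition matrices.
    """
    A = P - np.eye(P.shape[0])
    Q, power = np.zeros_like(A), np.eye(P.shape[0])
    for k in range(1, terms + 1):
        power = power @ A
        Q += ((-1) ** (k + 1) / k) * power
    return Q

P = np.array([[0.95, 0.04, 0.01],        # illustrative rating-transition matrix
              [0.03, 0.94, 0.03],
              [0.00, 0.05, 0.95]])
print(np.round(series_generator(P), 4))  # rows sum to ~0 when converged
```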

Finding Generators for Markov Chains via Empirical Transition Matrices, with Applications to Credit Ratings

Robert B. Israel, Jeffrey S. Rosenthal, Jason Z. Wei
2001 Mathematical Finance  
In this paper we identify conditions under which a true generator does or does not exist for an empirically observed Markov transition matrix.  ...  We also show how to obtain an approximate generator when a true generator does not exist. We give illustrations using credit rating transition matrices published by Moody's and by Standard and Poor's.  ... 
doi:10.1111/1467-9965.00114 fatcat:be4tyqjtqffvhayv2nlqe7iqxm
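
When the matrix logarithm of an observed transition matrix has negative off-diagonal entries, no true generator exists; a common repair, in the spirit of the approximate generator the abstract mentions, is to clip those entries and rebalance each row to sum to zero. A sketch assuming scipy is available (the rebalancing rule below is one simple choice, not necessarily the paper's exact method):

```python
import numpy as np
from scipy.linalg import logm

def approximate_generator(P):
    Q = logm(P).real                     # principal matrix logarithm of P
    n = Q.shape[0]
    for i in range(n):
        # A generator needs nonnegative off-diagonal entries: clip violations,
        for j in range(n):
            if j != i and Q[i, j] < 0.0:
                Q[i, j] = 0.0
        # then reset the diagonal so the row sums to zero again.
        Q[i, i] = -(Q[i].sum() - Q[i, i])
    return Q

P = np.array([[0.90, 0.08, 0.02],        # illustrative one-year rating matrix
              [0.05, 0.90, 0.05],
              [0.01, 0.09, 0.90]])
print(np.round(approximate_generator(P), 4))
```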

Probabilistic Bisection Converges Almost as Quickly as Stochastic Approximation [article]

Peter I. Frazier, Shane G. Henderson, Rolf Waeber
2016 arXiv   pre-print
After implementing these changes, the new process is a Markov chain on (−∞, ∞) that is pathwise dominated by Y, provided that the new process is initiated at time 0 below Y_0.  ...  The event B(W, m) is F_m-measurable, and E(W_n | F_m) I_{B(W, m)} ... , implying that we may assume without loss of generality that W_m is F_m-measurable.  ... 
arXiv:1612.03964v1 fatcat:uaatdnrspnfspfhzsflhvmpq6a
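
For reference, the probabilistic bisection algorithm the paper analyzes keeps a posterior density over the root's location, queries at the posterior median, and reweights by the noisy sign response. A minimal discretized sketch (the root location, oracle accuracy, and grid are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x_star, p = 0.62, 0.7                    # hidden root; oracle is correct w.p. p
grid = np.linspace(0.0, 1.0, 10_001)
post = np.full(grid.size, 1.0 / grid.size)   # discretized posterior

for _ in range(200):
    median = grid[np.searchsorted(np.cumsum(post), 0.5)]
    # Noisy oracle: reports sign(x_star - median), flipped w.p. 1 - p.
    says_right = (x_star > median) == (rng.random() < p)
    side = grid > median if says_right else grid <= median
    post = np.where(side, p * post, (1.0 - p) * post)   # Bayes update
    post /= post.sum()

print(grid[np.argmax(post)])             # concentrates near x_star
```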

Rates in almost sure invariance principle for quickly mixing dynamical systems [article]

C Cuny, A Korepanov, Florence Merlevède
2018 arXiv   pre-print
For a large class of quickly mixing dynamical systems, we prove that the error in the almost sure approximation with a Brownian motion is of order O((log n)^a) with a > 2.  ...  Recall also that, following [7, Appendix A], we can and do assume without loss of generality that the Markov chain (g_n) is aperiodic.  ...  We denote the stationary measure of our Markov chain by ν.  ... 
arXiv:1811.09094v1 fatcat:2lsnthmiffe23gipzjiugpin6u

Quickly Generating Representative Samples from an RBM-Derived Process

Olivier Breuleux, Yoshua Bengio, Pascal Vincent
2011 Neural Computation  
We justify such approaches as ways to escape modes while approximately keeping the same asymptotic distribution of the Markov chain.  ...  Let us focus for now only on sampling procedures, and consider a generic MCMC with transition probability matrix A, i.e., the state s_t at step t in the chain is obtained from the state s_{t−1} at step t − 1  ...  P(x, h) ∝ exp(∑_j b_j h_j + ∑_i c_i x_i + ∑_{ij} W_{ij} x_i h_j)   (1), where the parameters are θ = (W, b, c).  ... 
doi:10.1162/neco_a_00158 fatcat:wootxtm35fc7jk5vytfisvolv4
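
The "generic MCMC with transition probability matrix A" in the snippet is just the loop below: each state s_t is drawn from the row of A indexed by s_{t−1}. A self-contained toy illustration (the matrix is made up):

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[0.80, 0.15, 0.05],        # A[i, j] = P(s_t = j | s_{t-1} = i)
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])

s, counts = 0, np.zeros(3)
for _ in range(100_000):
    s = rng.choice(3, p=A[s])            # draw s_t from row s_{t-1} of A
    counts[s] += 1

print(counts / counts.sum())             # approaches the stationary distribution
```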

Being Optimistic to Be Conservative: Quickly Learning a CVaR Policy [article]

Ramtin Keramati, Christoph Dann, Alex Tamkin, Emma Brunskill
2020 arXiv   pre-print
However, relatively little is known about how to explore to quickly learn policies with good CVaR.  ...  In this paper, we present the first algorithm for sample-efficient learning of CVaR-optimal policies in Markov decision processes based on the optimism in the face of uncertainty principle.  ...  We present a new algorithm for quickly learning CVaR-optimal policies in Markov decision processes.  ... 
arXiv:1911.01546v2 fatcat:qnca57j7rrew7ocbyuqrqseweu

Being Optimistic to Be Conservative: Quickly Learning a CVaR Policy

Ramtin Keramati, Christoph Dann, Alex Tamkin, Emma Brunskill
2020 Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20)  
However, relatively little is known about how to explore to quickly learn policies with good CVaR.  ...  In this paper, we present the first algorithm for sample-efficient learning of CVaR-optimal policies in Markov decision processes based on the optimism in the face of uncertainty principle.  ...  We are interested in the CVaR of the discounted cumulative reward in a Markov Decision Process (MDP).  ... 
doi:10.1609/aaai.v34i04.5870 fatcat:d6i6e4yubfhhnekax4bnofzupu
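
For reference, CVaR at level α is the expected value of the worst α-fraction of outcomes, which on samples is a tail average. A short illustrative sketch of the quantity being optimized (not of the paper's optimistic algorithm):

```python
import numpy as np

def cvar(samples, alpha=0.05):
    """Average of the worst alpha-fraction of sampled returns."""
    k = max(1, int(np.ceil(alpha * len(samples))))
    return np.sort(samples)[:k].mean()

rng = np.random.default_rng(2)
returns = rng.normal(loc=1.0, scale=2.0, size=100_000)   # illustrative returns
print(cvar(returns, alpha=0.05))         # well below the mean return of 1.0
```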

How quickly can we sample a uniform domino tiling of the 2L x 2L square via Glauber dynamics? [article]

Benoit Laslier, Fabio Toninelli (Université de Lyon, CNRS)
2014 arXiv   pre-print
A conceptually simple (even if computationally not the most efficient) way of sampling uniformly one among so many tilings is to introduce a Markov chain algorithm (Glauber dynamics) where, with rate 1  ...  Our result applies to rather general domain shapes (not just the 2L × 2L square), provided that the typical height function associated to the tiling is macroscopically planar in the large L limit, under  ...  From the point of view of theoretical computer science [17, 16, 28, 20], the interesting question is to understand how quickly, as a function of the size of G, the Markov chain approaches equilibrium  ... 
arXiv:1210.5456v2 fatcat:xxrwzxxar5hcninhotlocvvb4q
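
A hedged sketch of the rotation dynamics described in the abstract, on the 2L × 2L square: pick a 2 × 2 block at random; if it is covered by two parallel dominoes, rotate them by 90 degrees. The cell encoding below is illustrative:

```python
import numpy as np

L = 4
N = 2 * L                                # tile the 2L x 2L square
# Encode each cell by its domino half: 'L'/'R' horizontal, 'U'/'D' vertical.
grid = np.empty((N, N), dtype="<U1")
grid[:, 0::2], grid[:, 1::2] = "L", "R"  # start from the all-horizontal tiling

rng = np.random.default_rng(3)
for _ in range(100_000):                 # repeated random 2x2 rotations
    i, j = rng.integers(0, N - 1, size=2)
    block = grid[i:i + 2, j:j + 2]
    if (block == [["L", "R"], ["L", "R"]]).all():
        block[:] = [["U", "U"], ["D", "D"]]   # two horizontals -> two verticals
    elif (block == [["U", "U"], ["D", "D"]]).all():
        block[:] = [["L", "R"], ["L", "R"]]   # two verticals -> two horizontals

print(grid)                              # an (approximately) uniform sample
```

Because each rotation is its own inverse and blocks are chosen uniformly, the chain is reversible with respect to the uniform distribution on tilings; the paper's question is how long such a chain takes to mix.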

Theoretical Analysis of Simple Evolution Strategies in Quickly Changing Environments [chapter]

Jürgen Branke, Wei Wang
2003 Lecture Notes in Computer Science  
So far, all papers in the area have assumed that the environment changes only between generations. In this paper, we take a first look at possibilities to handle a change during a generation.  ...  For that purpose, we derive an analytical model for a (1, 2) evolution strategy and show that sometimes it is better to ignore the environmental change until the end of the generation than to evaluate  ...  The expected fitness distribution of the next generation's parent individual corresponds to a transition matrix of a Markov chain.  ... 
doi:10.1007/3-540-45105-6_66 fatcat:jwg5h6givrd7zbbyoxl45faoou
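
A minimal sketch of a (1, 2) evolution strategy of the kind analyzed in the paper: one parent produces two offspring per generation, and the better offspring becomes the next parent. Here a 1-D sphere function whose optimum drifts each generation stands in for the changing environment; all constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
sigma, target, parent = 0.3, 0.0, 5.0    # mutation strength, optimum, start

for _ in range(200):
    offspring = parent + sigma * rng.standard_normal(2)   # two mutants
    fitness = (offspring - target) ** 2                   # sphere, minimized
    parent = offspring[np.argmin(fitness)]                # comma selection
    target += 0.05                                        # environment drifts

print(parent, target)                    # the parent tracks the moving optimum
```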

The action of a few permutations on r-tuples is quickly transitive

Joel Friedman, Antoine Joux, Yuval Roichman, Jacques Stern, Jean-Pierre Tillich
1998 Random structures & algorithms (Print)  
…, n, there is a product of fewer than C log n of the σ's which maps the first r-tuple to the second.  ...  Although we came across this problem while studying a rather unrelated cryptographic problem, it belongs to a general context in which random Cayley graph quotients of S_n are good expanders.  ...  It should be pointed out that Theorem 2.1 and classical results on Markov chains (see Section 2 of [24]) imply that for every fixed d ≥ 2 and r ≥ 1, random walks on these graphs are rapidly  ... 
doi:10.1002/(sici)1098-2418(199807)12:4<335::aid-rsa2>3.0.co;2-u fatcat:ndyj6gvmzzempptrnolv3ew2rq
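
The property in the title can be checked directly for small cases: build the graph whose vertices are r-tuples of distinct points and whose edges apply one of the chosen permutations, then breadth-first search gives the number of generator applications needed to reach any tuple from any other. An illustrative check with random permutations and tiny n, r:

```python
import random
from collections import deque

n, r, num_perms = 7, 2, 3
random.seed(5)
perms = [tuple(random.sample(range(n), n)) for _ in range(num_perms)]

start = tuple(range(r))                  # an arbitrary r-tuple of distinct points
dist = {start: 0}
queue = deque([start])
while queue:
    t = queue.popleft()
    for p in perms:                      # apply the generators only (no inverses)
        image = tuple(p[x] for x in t)
        if image not in dist:
            dist[image] = dist[t] + 1
            queue.append(image)

# Transitive on 2-tuples? And the longest product of generators needed?
print(len(dist) == n * (n - 1), max(dist.values()))
```

With random permutations the action is usually transitive; the theorem's content is that the diameter printed here grows only like C log n.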

Orbits for the Impatient: A Bayesian Rejection-sampling Method for Quickly Fitting the Orbits of Long-period Exoplanets

Sarah Blunt, Eric L. Nielsen, Robert J. De Rosa, Quinn M. Konopacky, Dominic Ryan, Jason J. Wang, Laurent Pueyo, Julien Rameau, Christian Marois, Franck Marchis, Bruce Macintosh, James R. Graham (+2 others)
2017 Astronomical Journal  
The family of Bayesian Markov chain Monte Carlo (MCMC) methods was introduced to the field of exoplanet orbit fitting by Ford (2004, 2006) and has been widely used (e.g., Nielsen et al. 2014;  ...  chains to converge.  ... 
doi:10.3847/1538-3881/aa6930 fatcat:7v4te35winacregk7axyg5z6gq
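
A hedged sketch of the rejection-sampling idea (in the spirit of the method in the title, not its actual implementation): draw trial parameters from the priors, compute χ² against the data, and accept each trial with probability exp(−(χ² − χ²_min)/2). The one-parameter "model" below is a stand-in for a Keplerian orbit model:

```python
import numpy as np

rng = np.random.default_rng(6)
data, err = 3.2, 0.5                     # one illustrative measurement
trials = rng.uniform(0.0, 10.0, 1_000_000)   # draws from a flat prior

chi2 = ((trials - data) / err) ** 2      # model(param) = param, for simplicity
accept = rng.random(trials.size) < np.exp(-0.5 * (chi2 - chi2.min()))
posterior = trials[accept]
print(posterior.mean(), posterior.std())     # ~ data and err under a flat prior
```

Unlike MCMC, every accepted draw is an independent posterior sample, which is the appeal for poorly constrained long-period orbits where chains can take a long time to converge.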

A counting renaissance: combining stochastic mapping and empirical Bayes to quickly detect amino acid sites under positive selection

Philippe Lemey, Vladimir N. Minin, Filip Bielejec, Sergei L. Kosakovsky Pond, Marc A. Suchard
2012 Computer applications in the biosciences : CABIOS  
chain (10^6 − 10^7).  ...  We fit the above codon partition model in a Bayesian framework and use Markov chain Monte Carlo (MCMC) integration to obtain a sample from the posterior distribution of model parameters Pr(θ | y), where  ... 
doi:10.1093/bioinformatics/bts580 pmid:23064000 pmcid:PMC3579240 fatcat:gfnaghclbrd5bjpkdtw4mfsex4
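
Stochastic mapping, at its simplest, simulates substitution histories of a CTMC along a branch conditioned on the observed endpoint states; rejection sampling (simulate forward, keep paths that end in the right state) is the easiest, if inefficient, way to do it. A two-state illustrative sketch (rates and branch length made up):

```python
import numpy as np

rng = np.random.default_rng(7)
rate = np.array([[-1.0, 1.0], [1.5, -1.5]])   # illustrative 2-state generator
t_branch, start, end = 1.0, 0, 1

def forward_path(state):
    """Run the chain for t_branch; return (final state, #substitutions)."""
    t, jumps = 0.0, 0
    while True:
        t += rng.exponential(1.0 / -rate[state, state])
        if t >= t_branch:
            return state, jumps
        state, jumps = 1 - state, jumps + 1

counts = [j for _ in range(100_000)
          for s, j in [forward_path(start)] if s == end]
print(np.mean(counts))                   # E[#substitutions | both endpoints]
```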

Chaos-based random number generators-part I: analysis [cryptography]

T. Stojanovski, L. Kocarev
2001 IEEE Transactions on Circuits and Systems I Fundamental Theory and Applications  
Piecewise linearity of the map enables us to mathematically find parameter values for which a generating partition is Markov and the RNG behaves as a Markov information source, and then to mathematically  ...  analyze the information generation process and the RNG.  ...  Is it a Markov chain? If yes, then what is the order of the Markov chain? And what is the dependence of the order of the Markov chain on the parameters?  ... 
doi:10.1109/81.915385 fatcat:y25zcolf3bfn5eoywl7zy72reu
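
The questions quoted in the snippet can be probed numerically: iterate a piecewise-linear map (a skew tent map here, as an illustrative stand-in for the paper's circuits), threshold the orbit into bits, and compare conditional bit statistics at increasing history lengths to gauge the order of the resulting Markov source:

```python
from collections import Counter

a, x = 0.63, 0.123                       # skew tent map parameter and seed
bits = []
for _ in range(200_000):
    x = x / a if x < a else (1.0 - x) / (1.0 - a)
    bits.append(1 if x >= a else 0)      # threshold at the partition point

# Estimate P(next bit = 1 | last k bits); if these stabilize as k grows,
# the bit stream behaves like a Markov source of that order.
for k in (1, 2, 3):
    ctx, ones = Counter(), Counter()
    for i in range(k, len(bits)):
        h = tuple(bits[i - k:i])
        ctx[h] += 1
        ones[h] += bits[i]
    print(k, {h: round(ones[h] / ctx[h], 3) for h in sorted(ctx)})
```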

Bayesian Spiking Neurons I: Inference

Sophie Deneve
2008 Neural Computation  
The corresponding generative model is a network of coupled hidden Markov chains (see Figure 6A).  ...  The corresponding model is a hidden Markov chain (see Figure 1A), which describes how the synaptic input was generated. This is the generative model of the sensory input, s_t.  ... 
doi:10.1162/neco.2008.20.1.91 pmid:18045002 fatcat:dcxhid6jmrabfb3x3v4ftmr264
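
A hedged sketch of the kind of inference described: a two-state hidden Markov chain drives Poisson-like spiking, and an online filter updates the posterior probability of the hidden state from each observed spike bin. All rates and step sizes here are illustrative, and the discrete-time filter is a stand-in for the paper's spiking implementation:

```python
import numpy as np

rng = np.random.default_rng(8)
dt, T = 0.01, 2_000                      # bin width (s) and number of bins
q_on, q_off = 1.0, 1.5                   # hidden-state switching rates (1/s)
r = np.array([5.0, 20.0])                # firing rates (Hz) in states 0 and 1

# Generate the hidden chain and the spike train.
z = np.zeros(T, dtype=int)
for t in range(1, T):
    flip = (q_on if z[t - 1] == 0 else q_off) * dt
    z[t] = 1 - z[t - 1] if rng.random() < flip else z[t - 1]
spikes = rng.random(T) < r[z] * dt

# Forward filter: p = P(z_t = 1 | spikes observed so far).
p, hits = 0.5, 0
for t in range(T):
    p += (1 - p) * q_on * dt - p * q_off * dt         # prediction step
    like = r * dt if spikes[t] else 1 - r * dt        # per-state likelihoods
    p = p * like[1] / (p * like[1] + (1 - p) * like[0])
    hits += (p > 0.5) == z[t]

print(hits / T)                          # fraction of bins decoded correctly
```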