A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020; you can also visit the original URL.
The file type is `application/pdf`


### Building Mean Field State Transition Models Using The Generalized Linear Chain Trick and Continuous Time Markov Chain Theory [article]

2020 · *arXiv* · pre-print
arXiv:2007.03902v1 · fatcat:qnwdt2vikrcfrd2xatlpfwdrcy

Intuitively, phase-type distributions are the absorption time distributions for continuous time Markov chains (CTMCs). ... The well-known Linear Chain Trick (LCT) allows modelers to derive mean field ODEs that assume gamma (Erlang) distributed passage times, by transitioning individuals sequentially through a chain of sub-states ... an equivalent Markov chain but where we only track transitions to new states. ...
### Asymptotic Properties of the Markov Chain Model method of finding Markov chains Generators of Empirical Transition Matrices in Credit Ratings Applications

2016 · *IOSR Journal of Mathematics*
doi:10.9790/5728-1204035360 · fatcat:zz2zom5gpba5lgcktxbpn24fmq

Conditions under which the Series method of finding Markov chains generators of Empirical Transition Matrices in Credit Ratings applications are identified in this article. ... I find that transitions on municipal Bond ratings are described adequately for a period of up to five years. ...
### Finding Generators for Markov Chains via Empirical Transition Matrices, with Applications to Credit Ratings

2001 · *Mathematical Finance*
doi:10.1111/1467-9965.00114 · fatcat:be4tyqjtqffvhayv2nlqe7iqxm

In this paper we identify conditions under which a true generator does or does not exist for an empirically observed Markov transition matrix. ... We also show how to obtain an approximate generator when a true generator does not exist. We give illustrations using credit rating transition matrices published by Moody's and by Standard and Poor's. ...
### Probabilistic Bisection Converges Almost as Quickly as Stochastic Approximation [article]

2016 · *arXiv* · pre-print
arXiv:1612.03964v1 · fatcat:uaatdnrspnfspfhzsflhvmpq6a

After implementing these changes, the new process is a Markov chain on (−∞, ∞) that is pathwise dominated by Y, provided that the new process is initiated at time 0 below Y_0. ... The event B(W, m) is F_m-measurable ..., implying that we may assume without loss of generality that W_m is F_m-measurable. ...
### Rates in almost sure invariance principle for quickly mixing dynamical systems [article]

2018 · *arXiv* · pre-print
arXiv:1811.09094v1 · fatcat:2lsnthmiffe23gipzjiugpin6u

For a large class of quickly mixing dynamical systems, we prove that the error in the almost sure approximation with a Brownian motion is of order O((log n)^a) with a > 2. ... Recall also that following [7, Appendix A], we can and do assume without loss of generality that the Markov chain (g_n) is aperiodic. ... The stationary measure of our Markov chain we denote by ν. ...
### Quickly Generating Representative Samples from an RBM-Derived Process

2011 · *Neural Computation*
doi:10.1162/neco_a_00158 · fatcat:wootxtm35fc7jk5vytfisvolv4

We justify such approaches as ways to escape modes while approximately keeping the same asymptotic distribution of the Markov chain. ... Let us focus for now only on sampling procedures, and consider a generic MCMC with transition probability matrix A, i.e., the state s_t at step t in the chain is obtained from the state s_{t−1} at step t ... ∑_i c_i x_i + ∑_{ij} W_{ij} x_i h_j (1), where parameters are θ = (W, b, c). ...
### Being Optimistic to Be Conservative: Quickly Learning a CVaR Policy [article]

2020 · *arXiv* · pre-print
arXiv:1911.01546v2 · fatcat:qnca57j7rrew7ocbyuqrqseweu

However, relatively little is known about how to explore to quickly learn policies with good CVaR. ... In this paper, we present the first algorithm for sample-efficient learning of CVaR-optimal policies in Markov decision processes based on the optimism in the face of uncertainty principle. ... Conclusion: We present a new algorithm for quickly learning CVaR-optimal policies in Markov decision processes. ...
### Being Optimistic to Be Conservative: Quickly Learning a CVaR Policy

2020 · *Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence and the Twenty-Eighth Innovative Applications of Artificial Intelligence Conference*
doi:10.1609/aaai.v34i04.5870 · fatcat:d6i6e4yubfhhnekax4bnofzupu

However, relatively little is known about how to explore to quickly learn policies with good CVaR. ... In this paper, we present the first algorithm for sample-efficient learning of CVaR-optimal policies in Markov decision processes based on the optimism in the face of uncertainty principle. ... We are interested in the CVaR of the discounted cumulative reward in a Markov Decision Process (MDP). ...
### How quickly can we sample a uniform domino tiling of the 2L x 2L square via Glauber dynamics? [article]

2014 · *arXiv* · pre-print
arXiv:1210.5456v2 · fatcat:xxrwzxxar5hcninhotlocvvb4q

A conceptually simple (even if computationally not the most efficient) way of sampling uniformly one among so many tilings is to introduce a Markov Chain algorithm (Glauber dynamics) where, with rate 1 ... Our result applies to rather general domain shapes (not just the 2L × 2L square), provided that the typical height function associated to the tiling is macroscopically planar in the large L limit, under ... From the point of view of theoretical computer science [17, 16, 28, 20], the interesting question is to understand how quickly, as a function of the size of G, the Markov chain approaches equilibrium ...
### Theoretical Analysis of Simple Evolution Strategies in Quickly Changing Environments [chapter]

2003 · *Lecture Notes in Computer Science*
doi:10.1007/3-540-45105-6_66 · fatcat:jwg5h6givrd7zbbyoxl45faoou

So far, all papers in the area have assumed that the environment changes only between generations. In this paper, we take a first look at possibilities to handle a change during a generation. ... For that purpose, we derive an analytical model for a (1, 2) evolution strategy and show that sometimes it is better to ignore the environmental change until the end of the generation, than to evaluate ... Convergence Plots: The expected fitness distribution of next generation's parent individual corresponds to a transition matrix of a Markov chain. ...
### The action of a few permutations on r-tuples is quickly transitive

1998 · *Random Structures & Algorithms (Print)*
doi:10.1002/(sici)1098-2418(199807)12:4<335::aid-rsa2>3.0.co;2-u · fatcat:ndyj6gvmzzempptrnolv3ew2rq

..., n, there is a product of less than C log n of the permutations which map the first i r-tuple to the second. ... Although we came across this problem while studying a rather unrelated cryptographic problem, it belongs to a general context of which random Cayley graph quotients of S are good expanders. ... It should be pointed out that using Theorem 2.1 and classical results on Markov chains (see Section 2 of [24]) implies that for every fixed d ≥ 2 and r ≥ 1, random walks on graphs of G_U are rapidly ...
### Orbits for the Impatient: A Bayesian Rejection-sampling Method for Quickly Fitting the Orbits of Long-period Exoplanets

2017 · *Astronomical Journal*
doi:10.3847/1538-3881/aa6930 · fatcat:7v4te35winacregk7axyg5z6gq

The family of Bayesian Markov Chain Monte Carlo methods (MCMC) was introduced to the field of exoplanet orbit fitting by Ford (2004, 2006) and has been widely used (e.g., Nielsen et al. 2014; ... chains to converge. ...
### A counting renaissance: combining stochastic mapping and empirical Bayes to quickly detect amino acid sites under positive selection

2012 · *Computer applications in the biosciences : CABIOS*

... chain (10^6 − 10^7). ... We fit the above codon partition model in a Bayesian framework and use Markov chain Monte Carlo (MCMC) integration to obtain a sample from the posterior distribution of model parameters Pr(θ | y), where ...

### Chaos-based random number generators-part I: analysis [cryptography]

2001 · *IEEE Transactions on Circuits and Systems I Fundamental Theory and Applications*
doi:10.1109/81.915385 · fatcat:y25zcolf3bfn5eoywl7zy72reu

Piecewise linearity of the map enables us to mathematically find parameter values for which a generating partition is Markov and the RNG behaves as a Markov information source, and then to mathematically analyze the information generation process and the RNG. ... Is it a Markov chain? If yes, then what is the order of the Markov chain? And what is the dependence of the order of the Markov chain on the parameters? ...
### Bayesian Spiking Neurons I: Inference

2008 · *Neural Computation*
doi:10.1162/neco.2008.20.1.91 · pmid:18045002 · fatcat:dcxhid6jmrabfb3x3v4ftmr264

The corresponding generative model is a network of a coupled hidden Markov chain (see Figure 6A). ... The corresponding model is a hidden Markov chain (see Figure 1A), which describes how the synaptic input was generated. This is the generative model of the sensory input, s_t. ...
*Showing results 1 — 15 out of 36,056 results*