Page 652 of Mathematical Reviews Vol. 58, Issue 1
[page]
1979
Mathematical Reviews
For the construction of these approximations he uses the idea that the Bayesian problem can be transformed into a standard Markov decision problem by incorporating the posterior distribution in the state ...
They prove that contracting Markov decision processes, in the sense of J. A. van Nunen [Contracting Markov decision processes, Math. Centr., Amsterdam 1976; MR 58 #20474], fit into this framework. ...
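The idea in the snippet above — turning a Bayesian (adaptive) control problem into a standard Markov decision problem by carrying the posterior along in the state — is the usual belief-state construction. A minimal Python sketch follows, assuming a Bernoulli reward whose unknown success probability gets a conjugate Beta posterior; the class and function names are illustrative, not taken from the review.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BeliefState:
    """Augmented state: the physical state plus the Beta posterior over
    an unknown Bernoulli success probability."""
    state: int
    alpha: float  # prior + observed successes
    beta: float   # prior + observed failures

def update(b: BeliefState, next_state: int, success: bool) -> BeliefState:
    # Bayesian update folded into the state transition: the posterior becomes
    # part of the (now fully observed) Markov state of the transformed problem.
    return BeliefState(next_state,
                       b.alpha + (1 if success else 0),
                       b.beta + (0 if success else 1))

def expected_reward(b: BeliefState) -> float:
    # Posterior mean of the unknown success probability.
    return b.alpha / (b.alpha + b.beta)

b = BeliefState(state=0, alpha=1.0, beta=1.0)   # uniform prior
b = update(b, next_state=1, success=True)
print(expected_reward(b))                        # 2/3 after one observed success
```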
Air Traffic Forecast Empirical Research Based on the MCMC Method
2012
Computer and Information Science
In this paper, the Markov Chain Monte Carlo (MCMC) method from applied statistical theory is introduced into the aviation sector, and airport air traffic forecasting is discussed ...
Airport air traffic is one of the most important and most difficult of all airport data forecasts. ...
The basic idea of the MCMC method is to construct a Markov chain whose stationary distribution is the posterior distribution of the parameters to be estimated, thus generating samples from the posterior distribution ...
doi:10.5539/cis.v5n5p50
fatcat:agt3hvr2svas3fcufcche75trm
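The snippet above states the core MCMC idea: build a Markov chain whose stationary distribution is the posterior of interest. A minimal sketch of a random-walk Metropolis-Hastings sampler in Python is given below; the Beta-Bernoulli target, proposal scale, and data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative target: posterior of a Bernoulli success probability p
# with a uniform prior and observed data y (8 successes out of 10 trials).
y, n = 8, 10

def log_posterior(p):
    if not 0.0 < p < 1.0:
        return -np.inf
    return y * np.log(p) + (n - y) * np.log(1.0 - p)  # Beta(9, 3) up to a constant

def metropolis_hastings(n_samples=5000, step=0.1, p0=0.5):
    samples = np.empty(n_samples)
    p, logp = p0, log_posterior(p0)
    for i in range(n_samples):
        proposal = p + step * rng.standard_normal()   # symmetric random-walk proposal
        logp_prop = log_posterior(proposal)
        if np.log(rng.random()) < logp_prop - logp:   # Metropolis acceptance rule
            p, logp = proposal, logp_prop
        samples[i] = p
    return samples

draws = metropolis_hastings()
print(draws[1000:].mean())  # close to the true posterior mean 9/12 = 0.75
```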
Optimal Camera Parameter Selection for State Estimation with Applications in Object Recognition
[chapter]
2001
Lecture Notes in Computer Science
The results show that the sequential decision process outperforms a random strategy, both in terms of recognition rate and in the number of views necessary to return a decision. ...
The convergence of the decision process can be proven. We demonstrate the benefits of our approach using an active object recognition scenario. ...
The key point of the convergence proof is that an irreducible Markov chain can be defined representing the sequential decision process [4]. Two corollaries give us the proof of convergence. ...
doi:10.1007/3-540-45404-7_41
fatcat:v3jjled6ibebzj4bdwwfk6rtnu
Convergence Monitoring of Markov Chains Generated for Inverse Tracking of Unknown Model Parameters in Atmospheric Dispersion
2011
Progress in Nuclear Science and Technology
These two diagnostics have been applied to the posterior quantities of the release point and the release rate inferred through the inverse tracking of unknown model parameters for the Yonggwang atmospheric ...
From these two convergence diagnostics, the validity of the generated Markov chains has been ensured, and the PSRF is suggested as an especially efficient tool for convergence monitoring for the source ...
Acknowledgment: This work was supported by the Korean Ministry of Knowledge Economy (2008-P-EP-HM-E-06-0000) and Sunkwang Atomic Energy Safety Co., Ltd. ...
doi:10.15669/pnst.1.464
fatcat:7mbib4m3nbdmti6lni6clvxxze
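PSRF in the entry above refers to the potential scale reduction factor (the Gelman-Rubin diagnostic). Below is a minimal sketch of the classical between/within-chain computation, assuming m parallel chains of equal length for a single scalar quantity; refinements such as rank-normalized variants used in modern software are omitted.

```python
import numpy as np

def psrf(chains):
    """Gelman-Rubin potential scale reduction factor.

    chains: array of shape (m, n) holding m parallel chains of length n
    for a single scalar quantity (e.g. a release rate).
    """
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()     # mean within-chain variance
    B = n * chain_means.var(ddof=1)           # between-chain variance
    var_hat = (n - 1) / n * W + B / n         # pooled variance estimate
    return np.sqrt(var_hat / W)               # values near 1 indicate convergence

# Example: two well-mixed chains drawn from the same distribution.
rng = np.random.default_rng(1)
print(psrf(rng.normal(size=(2, 1000))))       # approximately 1.0
```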
Implementing random scan Gibbs samplers
2005
Computational statistics (Zeitschrift)
The Gibbs sampler, being a popular routine amongst Markov chain Monte Carlo sampling methodologies, has revolutionized the application of Monte Carlo methods in statistical computing practice. ...
The decision rules through which this strategy is chosen are based on convergence properties of the induced chain and precision of statistical inferences drawn from the generated Monte Carlo samples. ...
We also study the effect of the decision criterion, be it convergence rate or variance, on the choice of visitation strategy. ...
doi:10.1007/bf02736129
fatcat:eqs7jjeqljefdkisjukysegfre
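A random-scan Gibbs sampler picks the coordinate to update at random at each step rather than sweeping coordinates in a fixed order. The sketch below uses a bivariate normal target with correlation rho; the target and the coordinate selection probabilities are illustrative assumptions, and tuning those probabilities is roughly the "visitation strategy" choice the paper's decision rules address.

```python
import numpy as np

rng = np.random.default_rng(2)
rho = 0.8  # correlation of the illustrative bivariate normal target

def random_scan_gibbs(n_samples=5000, probs=(0.5, 0.5)):
    """Random-scan Gibbs sampler for a standard bivariate normal with correlation rho."""
    x = np.zeros(2)
    samples = np.empty((n_samples, 2))
    cond_sd = np.sqrt(1.0 - rho ** 2)
    for t in range(n_samples):
        i = rng.choice(2, p=probs)              # randomly pick a coordinate to update
        j = 1 - i
        x[i] = rng.normal(rho * x[j], cond_sd)  # draw from the full conditional x_i | x_j
        samples[t] = x
    return samples

draws = random_scan_gibbs()
print(np.corrcoef(draws[1000:].T)[0, 1])        # close to rho = 0.8
```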
Dynamic Tempered Transitions for Exploring Multimodal Posterior Distributions
2004
Political Analysis
Multimodal, high-dimension posterior distributions are well known to cause mixing problems for standard Markov chain Monte Carlo (MCMC) procedures; unfortunately such functional forms readily occur in ...
in response to current posterior features. ...
A Markov chain has converged to its limiting distribution (the posterior of interest for properly set up MCMC applications) when it generates only legitimate values from this distribution in proportion ...
doi:10.1093/pan/mph027
fatcat:kabu7eckxzeq3g3pftrmrkd7c4
Is Partial-Dimension Convergence a Problem for Inferences from MCMC Algorithms?
2008
Political Analysis
The usual culprit is slow mixing of the Markov chain and therefore slow convergence towards the target distribution. ...
Although practitioners are generally aware of the importance of convergence of the Markov chain, many are not fully aware of the difficulties in fully assessing convergence across multiple dimensions. ...
A Markov chain has converged at time t to its invariant distribution (the posterior distribution of interest for correctly set up Bayesian applications) when the transition kernel produces draws arbitrarily ...
doi:10.1093/pan/mpm019
fatcat:75u6fnbhxjblbagvszri7mwypa
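One way to surface the partial-dimension convergence problem raised above is to run a diagnostic separately on each parameter dimension: a chain can look converged on some coordinates while still drifting on others. Below is a minimal sketch using Geweke-style z-scores per dimension; the 10%/50% window fractions are the conventional choice, the variance estimate is deliberately naive, and the example chain is synthetic.

```python
import numpy as np

def geweke_z(chain, first=0.1, last=0.5):
    """Geweke-style z-scores comparing the means of an early and a late window.

    |z| much larger than 2 in any dimension suggests that dimension has not
    converged, even if the other dimensions look fine.
    """
    chain = np.asarray(chain, dtype=float)
    n = len(chain)
    a = chain[: int(first * n)]
    b = chain[int((1 - last) * n):]
    # Naive variance of the mean; a full implementation uses spectral density estimates.
    return (a.mean(axis=0) - b.mean(axis=0)) / np.sqrt(
        a.var(axis=0, ddof=1) / len(a) + b.var(axis=0, ddof=1) / len(b)
    )

# Synthetic 2-D chain: dimension 0 is stationary, dimension 1 still trends upward.
rng = np.random.default_rng(3)
n = 4000
chain = np.column_stack([rng.normal(size=n),
                         rng.normal(size=n) + np.linspace(0, 1, n)])
print(geweke_z(chain))   # small |z| for dimension 0, large |z| for dimension 1
```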
Convergence analyses and comparisons of Markov chain Monte Carlo algorithms in digital communications
2002
IEEE Transactions on Signal Processing
nor do they explicitly estimate the channel by employing training signals or decision-feedback; and c) they are well suited for iterative (turbo) processing in coded systems. ...
Recently, Markov chain Monte Carlo (MCMC) methods have been applied to the design of blind Bayesian receivers in a number of digital communications applications. ...
CONVERGENCE OF MCMC SAMPLERS In all MCMC algorithms, a Markov transition rule (or kernel) is first constructed so that its limiting distribution is the desired posterior distribution. ...
doi:10.1109/78.978381
fatcat:nkgpgxiz7rgbtkjyzrijxrcfhy
A simple introduction to Markov Chain Monte–Carlo sampling
2016
Psychonomic Bulletin & Review
Markov Chain Monte-Carlo (MCMC) is an increasingly popular method for obtaining information about distributions, especially for estimating posterior distributions in Bayesian inference. ...
Keywords: Markov Chain Monte-Carlo · MCMC · Bayesian inference · Tutorial
Over the course of the twenty-first century, the use of Markov chain Monte-Carlo sampling, or MCMC, has grown dramatically. ...
The process of ignoring the initial part of the Markov chain is discussed in more detail later in this section. ...
doi:10.3758/s13423-016-1015-8
pmid:26968853
pmcid:PMC5862921
fatcat:3oqyp5bphvfcxocsvdqmf6jqsu
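"Ignoring the initial part of the Markov chain," mentioned in the entry above, is the usual burn-in step: early draws still reflect the arbitrary starting value and are discarded before summarizing the posterior. A minimal self-contained illustration follows, using an AR(1) process started far from its stationary mean as a stand-in for a poorly initialized MCMC chain; the chain length, persistence, and burn-in size are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy chain: an AR(1) process with stationary mean 0, deliberately started at 500.
n, phi = 2000, 0.95
chain = np.empty(n)
chain[0] = 500.0
for t in range(1, n):
    chain[t] = phi * chain[t - 1] + rng.standard_normal()

burn_in = 200
print(chain.mean())            # noticeably biased towards the starting value
print(chain[burn_in:].mean())  # much closer to the stationary mean of 0
```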
Bridge Deterioration Modeling by Markov Chain Monte Carlo (MCMC) Simulation Method
[chapter]
2014
Lecture Notes in Mechanical Engineering
Results show that TPMs corresponding to critical bridge elements can be obtained by the Metropolis-Hastings Algorithm (MHA), coded in a MATLAB program, run until it converges to stationary transition probability distributions ...
P(θ|Y) is known as the posterior distribution or target distribution, and P(θ) is called the prior distribution of the unknown model parameter. ...
doi:10.1007/978-3-319-09507-3_47
fatcat:v2bvmdgdabam3kcdz4ry3x4o4y
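The notation in the snippet above is the standard Bayes relationship between the target (posterior) and the prior; written out in LaTeX (my transcription, not the paper's own display):

```latex
% Bayes' rule: the posterior (target) is proportional to the likelihood times the prior.
P(\theta \mid Y) \;=\; \frac{P(Y \mid \theta)\, P(\theta)}{P(Y)}
\;\propto\; P(Y \mid \theta)\, P(\theta)
```

Samplers such as the Metropolis-Hastings algorithm only need the right-hand side up to the normalizing constant P(Y), which is what makes them convenient for estimating transition probability matrices of this kind.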
Evidence accumulation models with R: A practical guide to hierarchical Bayesian methods
2020
The Quantitative Methods for Psychology
Reuse: This article is distributed under the terms of the Creative Commons Attribution (CC BY) licence. ...
We illustrate its basic use and an example of fitting complex hierarchical Wiener diffusion models to four shooting-decision data sets. ...
The right panel shows the Markov chains are well-mixed. The lower panel in Figure 4 shows the posterior distributions of each parameter. ...
doi:10.20982/tqmp.16.2.p133
fatcat:6vuhbrq44raijhd3qrgq3gyzgu
Inferring the Optimal Policy using Markov Chain Monte Carlo
[article]
2019
arXiv
pre-print
In order to resolve these problems, we propose a technique using Markov Chain Monte Carlo to generate samples from the posterior distribution of the parameters conditioned on being optimal. ...
This paper investigates methods for estimating the optimal stochastic control policy for a Markov Decision Process with unknown transition dynamics and an unknown reward function. ...
In Neal's formulation, the posterior distribution of the parameters θ given the dataset of inputs and labels D is approximated using Markov Chain Monte Carlo [7]. ...
arXiv:1912.02714v1
fatcat:mflokdp2kjda7mvrlpdc7z4sxi
Theory and Dynamics of Perceptual Bistability
[chapter]
2007
Advances in Neural Information Processing Systems 19
We formalize the theory, explicitly derive switching rate distributions and discuss qualitative properties of the theory including the effect of changes in the posterior distribution on switching rates ...
In particular, we propose that the brain explores a posterior distribution over image interpretations at a rapid time scale via a sampling-like process and updates its interpretation when a sampled interpretation ...
Extrema of the posterior sampling process: If the sampling process has no long-range temporal dependence, then under mild assumptions the distribution of extrema converges in distribution to one of three ...
doi:10.7551/mitpress/7503.003.0157
fatcat:ceuproo4lvaxbkst2ob4elqyme
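The "one of three" limiting families referred to above are the Gumbel, Fréchet, and Weibull distributions of classical extreme value theory (the Fisher-Tippett-Gnedenko theorem). A minimal sketch follows, assuming an i.i.d. Gaussian stand-in for the posterior samples, whose block maxima lie in the Gumbel domain of attraction; the block sizes are arbitrary illustrative choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Stand-in for the posterior sampling process: i.i.d. Gaussian draws,
# which lie in the Gumbel domain of attraction.
block_size, n_blocks = 1000, 2000
maxima = rng.standard_normal((n_blocks, block_size)).max(axis=1)

# Fit a Gumbel distribution to the block maxima and report the KS distance;
# a small distance indicates the extrema are well described by a Gumbel law.
loc, scale = stats.gumbel_r.fit(maxima)
print(loc, scale, stats.kstest(maxima, "gumbel_r", args=(loc, scale)).statistic)
```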
Page 7283 of Mathematical Reviews Vol. , Issue 2000j
[page]
2000
Mathematical Reviews
They show that the optimal rate of convergence to a normal distribution for # is not obtained unless g is undersmoothed. ...
For each mode, the optimal rate of convergence (o.r.c.) for i.i.d. sequences is known. ...
Iterative simulation methods
1990
Journal of Computational and Applied Mathematics
The standard methods of generating samples from univariate distributions often become hopelessly inefficient when applied to realizations of stochastic processes.
The methods are surveyed and compared, with particular reference to their convergence properties.
Both the prior distribution and the model of the observation process are arranged carefully so that the posterior is a Markov random field on a sparsely connected graph.
doi:10.1016/0377-0427(90)90347-3
fatcat:wd6foqhdwnepfpuuv24rkkzjiu