2,820 Hits in 6.2 sec

The document as an ergodic Markov chain

Eduard Hoenkamp, Dawei Song
2004 Proceedings of the 27th annual international conference on Research and development in information retrieval - SIGIR '04  
Viewing documents as language samples introduces the issue of defining a joint probability distribution over the terms. The present paper models a document as the result of a Markov process.  ...  We verified this in an experiment on query expansion demonstrating both the validity and the practicability of the method. This holds a promise for general language models.  ...  The document model as an ergodic process If the probability of a term depends only on the preceding term, then one can define the distribution as a Markov chain with the terms as states.  ... 
doi:10.1145/1008992.1009088 dblp:conf/sigir/HoenkampS04 fatcat:xuv4cmlv2vdp5dan4zeb5xwfh4
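
The core modelling step in this entry — treating the terms of a document as the states of a Markov chain whose transition probabilities are estimated from adjacent-term counts — can be sketched in a few lines. This is a minimal illustration of that idea, not code from the paper; the tokenization and the additive-smoothing constant are assumptions made here.

from collections import Counter, defaultdict

def term_transition_matrix(text, alpha=0.01):
    # Estimate a first-order Markov chain over terms: states are terms,
    # transition probabilities come from adjacent-term counts.
    # alpha is a small additive-smoothing constant (an assumption here)
    # that keeps every transition probability positive, so the chain
    # remains ergodic even for unseen term pairs.
    terms = text.lower().split()
    vocab = sorted(set(terms))
    counts = defaultdict(Counter)
    for prev, nxt in zip(terms, terms[1:]):
        counts[prev][nxt] += 1
    P = {}
    for u in vocab:
        row_total = sum(counts[u].values()) + alpha * len(vocab)
        P[u] = {v: (counts[u][v] + alpha) / row_total for v in vocab}
    return P

P = term_transition_matrix("the cat sat on the mat and the cat slept")
print(P["the"]["cat"])   # probability that "cat" follows "the"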

An Effective Approach to Verbose Queries Using a Limited Dependencies Language Model [chapter]

Eduard Hoenkamp, Peter Bruza, Dawei Song, Qiang Huang
2009 Lecture Notes in Computer Science  
The term co-occurrence statistics of queries and documents are each represented by a Markov chain.  ...  The paper proves that such a chain is ergodic, and therefore its asymptotic behavior is unique, stationary, and independent of the initial state.  ...  The surface constraints were represented by using an ergodic Markov chain.  ... 
doi:10.1007/978-3-642-04417-5_11 fatcat:7zosfyggkzf6dkyvptajerxzzu
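
For reference, the asymptotic behaviour invoked in the excerpt is the standard one for an ergodic (irreducible, aperiodic, positive recurrent) chain with transition matrix P: there is a unique stationary distribution \pi, and the chain forgets its initial state,

\pi P = \pi, \qquad \lim_{n \to \infty} P^n(x, \cdot) = \pi \quad \text{for every initial state } x.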

On Rates of Convergence for Markov Chains under Random Time State Dependent Drift Criteria [article]

Ramiro Zurkowski, Serdar Yüksel, Tamás Linder
2015 arXiv   pre-print
We quantify how the rate of ergodicity, nature of Lyapunov functions, their drift properties, and the distributions of stopping times are related. We finally study an application in networked control.  ...  Motivated by such applications and extending previous work on Lyapunov-theoretic drift criteria, we establish both subgeometric and geometric rates of convergence for Markov chains under state dependent  ...  In addition, as documented extensively in the literature, Markov Chain Monte Carlo algorithms require a tedious analysis on rates of convergence bounds to obtain probabilistically guaranteed simulation  ... 
arXiv:1312.4210v2 fatcat:fpyose5rvzagvjgtwyzowh7d5q
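
For orientation (the notation here is generic, not the article's), the classical Foster-Lyapunov drift condition that this line of work extends to random, state dependent times has the form

E\big[ V(X_{t+1}) \mid X_t = x \big] \le V(x) - f(x) + b\,\mathbf{1}_C(x),

with V \ge 0 a Lyapunov function, f \ge 0 governing the (geometric or subgeometric) rate of convergence, C a small set, and b < \infty.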

A Poisson formula for harmonic projections

V. Kaimanovich, A. Fisher
1998 Annales de l'I.H.P. Probabilités et statistiques  
Article digitized as part of the program "Numérisation de documents anciens mathématiques", http://www.numdam.org/  ...  As an illustration, let us deduce from (2) a measure-linear analogue of the Birkhoff ergodic theorem [3].  ...  … be the path space of the associated Markov chain on X, and P_θ be the Markov measure on the path space corresponding to an initial distribution θ on X (we shall also use the notation P_x if θ = δ_x)  ... 
doi:10.1016/s0246-0203(98)80030-7 fatcat:n65nwsbezfe3rewpapxsusgumm
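
The Birkhoff ergodic theorem mentioned in the excerpt asserts, in its usual form, that for a measure-preserving transformation T of a probability space (\Omega, \mu) and f \in L^1(\mu), time averages converge almost everywhere:

\frac{1}{n} \sum_{k=0}^{n-1} f(T^k x) \longrightarrow E[f \mid \mathcal{I}](x) \quad \mu\text{-a.e.},

where \mathcal{I} is the \sigma-algebra of T-invariant sets; when T is ergodic the limit is the constant \int f \, d\mu.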

On Rates of Convergence for Markov Chains Under Random Time State Dependent Drift Criteria

Ramiro Zurkowski, Serdar Yüksel, Tamás Linder
2016 IEEE Transactions on Automatic Control  
We quantify how the rate of ergodicity, nature of Lyapunov functions, their drift properties, and the distributions of stopping times are related. We finally study an application in networked control.  ...  Motivated by such applications and extending previous work on Lyapunov-theoretic drift criteria, we establish both subgeometric and geometric rates of convergence for Markov chains under state dependent  ...  In addition, as documented extensively in the literature, Markov Chain Monte Carlo algorithms require a tedious analysis on rates of convergence bounds to obtain probabilistically guaranteed simulation  ... 
doi:10.1109/tac.2015.2447251 fatcat:b6u3hw3u4jarbg2dopy3u34p2y

Markov Chain Monte-Carlo Enhanced Variational Quantum Algorithms [article]

Taylor L. Patti, Omar Shehab, Khadijeh Najafi, Susanne F. Yelin
2022 arXiv   pre-print
In this work, we introduce a variational quantum algorithm that uses classical Markov chain Monte Carlo techniques to provably converge to global minima.  ...  These performance guarantees are derived from the ergodicity of our algorithm's state space and enable us to place analytic bounds on its time-complexity.  ...  …'s internship at IBM Quantum, for which T.L.P. thanks Katie Pizzolato and the entire IBM Quantum team. S.F.Y. would like to acknowledge funding by NSF and AFOSR.  ... 
arXiv:2112.02190v2 fatcat:u5tyw67ilbhvjpjbengijh2umy

The Navigation Problem in the World-Wide-Web [chapter]

M. Levene
2002 Studies in Classification, Data Analysis, and Knowledge Organization  
be viewed as a finite ergodic Markov chain.  ...  A collection of trails is taken as input and an ergodic Markov chain is produced as output with the probabilities of transitions corresponding to the frequency the user traversed the associated links.  ...  In Subsection 3.3 we utilise our view of a hypertext database as an ergodic Markov chain by characterising typical user navigation sessions in terms of the entropy of the Markov chain.  ... 
doi:10.1007/978-3-642-55991-4_31 fatcat:kq2k26qycfaytg224tjhcb4eyq
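
The entropy of an ergodic Markov chain used in the excerpt to characterise typical navigation sessions is, in its standard form, the entropy rate computed from the stationary distribution \pi and the transition probabilities p_{ij}:

H = - \sum_i \pi_i \sum_j p_{ij} \log p_{ij}.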

Analysis of Light Utility Vehicle Readiness in Military Transportation Systems Using Markov and Semi-Markov Processes

Mateusz Oszczypała, Jarosław Ziółkowski, Jerzy Małachowski
2022 Energies  
As part of the considerations for the continuous time, verification of the distributions of time characteristics led to the development of a semi-Markov model.  ...  Operating states were distinguished relating to the implementation of the transport task, refueling, parking in the garage, as well as maintenance and repairs.  ...  Conflicts of Interest: The authors declare no conflict of interest.  ... 
doi:10.3390/en15145062 fatcat:kqepbaenjvbi3psw7k7xgejovi

Personalized recommendation driven by information flow

Xiaodan Song, Belle L. Tseng, Ching-Yung Lin, Ming-Ting Sun
2006 Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval - SIGIR '06  
In our experiments with an online document recommendation system, the results demonstrate that the EABIF and the TEABIF can respectively achieve an improved (precision, recall) of (91.0%, 87.1%) and (108.5%  ...  We propose that the information access behavior of a group of people can be modeled as an information flow issue, in which people intentionally or unintentionally influence and inspire each other, thus  ...  In this paper, we model the adoption graph as an ergodic Markov chain with primitive transition probability matrix as PageRank does [23, 25] to guarantee the convergence of the power of the matrix.  ... 
doi:10.1145/1148170.1148258 dblp:conf/sigir/SongTLS06 fatcat:g3debxcsybbobc3ziyhumdf2wi
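
The PageRank-style device alluded to in the excerpt — mixing the raw transition matrix with a uniform jump so that the result is primitive and the power iteration converges to a unique stationary vector — can be sketched as below. This is an illustrative sketch, not the paper's implementation; the damping factor 0.85 and the tolerance are conventional choices, not values from the paper.

import numpy as np

def stationary_distribution(P, damping=0.85, tol=1e-10, max_iter=10_000):
    # Mix the row-stochastic matrix P with a uniform jump so every entry
    # of the resulting matrix G is positive (G is primitive); the power
    # iteration then converges to a unique stationary vector, as in PageRank.
    n = P.shape[0]
    G = damping * P + (1.0 - damping) / n
    pi = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new_pi = pi @ G
        if np.abs(new_pi - pi).sum() < tol:
            return new_pi
        pi = new_pi
    return pi

adoption = np.array([[0.0, 1.0],
                     [0.5, 0.5]])
print(stationary_distribution(adoption))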

Probabilistic pattern of risks in company's Quality management system

Vladimir Mager, Tatyana Leonova, Liudmila Chernenkaya, A.A. Radionov, A.V. Shmidt, I.A. Bayev, T.A. Khudyakova, A.V. Keller, Y.B. Kolbachev, A.V. Babkin, V.V. Savaley
2017 SHS Web of Conferences  
Representation of QMS processes as a graph with controlled discrete Markov chains is suggested, which allows one to evaluate the probability of non-fulfillment of customer requirements as a function of an intensity  ...  Aspects of the theory of dependability and Markov techniques are used, which are applied for evaluation of the probability of failures in complicated technical systems.  ...  If a random sequence has the Markov property, it is called a Markov chain, and if the states of the system are observed at discrete points in time, we obtain a discrete Markov chain, in which the system can move from one state to the  ... 
doi:10.1051/shsconf/20173501050 fatcat:vn7j5ohhqjh3tle2u4y22abymu
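
Written out formally, the Markov property paraphrased in the last excerpt says that, for a discrete-time chain with transition probabilities p_{ij},

P(X_{n+1} = j \mid X_n = i, X_{n-1} = i_{n-1}, \ldots, X_0 = i_0) = P(X_{n+1} = j \mid X_n = i) = p_{ij}.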

An application of the theory of semi-Markov processes in simulation

Sonia Malefaki, George Iliopoulos
2007 Recent Advances in Stochastic Modeling and Data Analysis  
Under certain conditions, the associated jump process is an ergodic semi-Markov process with stationary distribution π.  ...  Working along the lines of the above approach, we are allowed to run more convenient Markov Chain Monte Carlo algorithms.  ...  In the case that the original sample sequence forms an ergodic Markov chain, the associated jump process is an ergodic semi-Markov process with stationary distribution π.  ... 
doi:10.1142/9789812709691_0026 fatcat:mmcvzjj7gfdxbjemnr6cusu7e4
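
For context (generic notation, not necessarily the paper's), the stationary distribution of an ergodic semi-Markov process is obtained by weighting the stationary distribution \nu of the embedded jump chain by the mean holding times m_j:

\pi_j = \frac{\nu_j m_j}{\sum_k \nu_k m_k}.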

Onomatology and content analysis of ergodic literature

Eugenia-Maria Kontopoulou, Maria Predari, Efstratios Gallopoulos
2013 Proceedings of the 3rd Narrative and Hypertext Workshop on - NHT '13  
We first establish a connection between the concept of ergodicity in mathematics and "ergodic literature" of the Choose-your-own-Adventure (CYOA) type that serves to answer some existing objections regarding  ...  We then consider some steps towards the construction of concept maps for CYOA-type ergodic literature. Our analysis is based on modeling ergodic literature using digraphs and matrices.  ...  This statement is an ergodic theorem for Markov chains.  ... 
doi:10.1145/2462216.2462221 dblp:conf/ht/KontopoulouPG13 fatcat:53qk6udktrdbtifjz7imxmevsy
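
The ergodic theorem for Markov chains that the excerpt alludes to is usually stated as the convergence of long-run visit frequencies to the stationary probabilities: for an ergodic chain with stationary distribution \pi,

\frac{1}{n} \sum_{k=1}^{n} \mathbf{1}\{X_k = j\} \longrightarrow \pi_j \quad \text{a.s. for every state } j.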

Block Gibbs Sampling for Bayesian Random Effects Models With Improper Priors: Convergence and Regeneration

Aixin Tan, James P. Hobert
2009 Journal of Computational and Graphical Statistics  
These standard errors can be used to choose an appropriate (Markov chain) Monte Carlo sample size.  ...  Another contribution of this paper is a result showing that, unless the data set is extremely small and unbalanced, the block Gibbs Markov chain is geometrically ergodic.  ...  Acknowledgments The authors thank the associate editor and two referees for helpful comments and suggestions. Hobert's research was supported by NSF grants DMS-0503648 and DMS-0805860.  ... 
doi:10.1198/jcgs.2009.08153 fatcat:ma6x5n5ej5bnbao5bhittfxu4q
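
Geometric ergodicity, the property established here for the block Gibbs chain, is conventionally defined as total-variation convergence at a geometric rate,

\| P^n(x, \cdot) - \pi \|_{\mathrm{TV}} \le M(x)\, t^n \quad \text{for some } t < 1 \text{ and } M(x) < \infty,

and it is this property that underwrites a central limit theorem, and hence valid Monte Carlo standard errors for choosing the chain length.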

Which ergodic averages have finite asymptotic variance?

George Deligiannidis, Anthony Lee
2018 The Annals of Applied Probability  
This allows us to characterize completely which ergodic averages have finite asymptotic variance when the Markov chain is an independence sampler.  ...  We show that the class of L^2 functions for which ergodic averages of a reversible Markov chain have finite asymptotic variance is determined by the class of L^2 functions  ...  We are grateful to the referees for helpful comments that have improved the paper.  ... 
doi:10.1214/17-aap1358 fatcat:v4dgiu56avbb7i7h7crazbob6e
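
The asymptotic variance at issue is the variance constant of the Markov chain central limit theorem; for a stationary chain with marginal distribution \pi it is conventionally written as

\sigma^2(f) = \operatorname{Var}_\pi\!\big(f(X_0)\big) + 2 \sum_{k=1}^{\infty} \operatorname{Cov}_\pi\!\big(f(X_0), f(X_k)\big),

and the question of the title is for which f \in L^2(\pi) this series is finite.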

Principled Selection of Hyperparameters in the Latent Dirichlet Allocation Model

Clint P. George, Hani Doss
2017 Journal of Machine Learning Research  
The method may be viewed as a computational scheme for implementation of an empirical Bayes analysis.  ...  We present a method, based on a combination of Markov chain Monte Carlo and importance sampling, for estimating the maximum likelihood estimate of the hyperparameters.  ...  This work is supported by the International Center for Automated Research at the UF Levin College of Law, NSF Grant DMS-11-06395, and NIH grant P30 AG028740.  ... 
dblp:journals/jmlr/GeorgeD17 fatcat:vju4xlhpkzct5e4qbcyalyye7q
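
A generic identity of the kind such MCMC-plus-importance-sampling schemes rely on expresses the marginal likelihood ratio between a hyperparameter value h and a fixed reference h_1 as a posterior expectation that can be estimated from a single chain run at h_1; the notation below (latent variables \psi, priors \nu_h) is illustrative and not taken from the paper:

\frac{m_h(y)}{m_{h_1}(y)} = E_{\psi \sim p(\psi \mid y, h_1)}\!\left[ \frac{\nu_h(\psi)}{\nu_{h_1}(\psi)} \right],

so maximizing the estimated ratio over h yields an empirical Bayes estimate of the hyperparameters.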
Showing results 1 — 15 out of 2,820 results