
On the Learnability of Hidden Markov Models [chapter]

Sebastiaan A. Terwijn
2002 Lecture Notes in Computer Science  
Rather than considering all probability distributions, or even just certain specific ones, the learning of a hidden Markov model takes place under a distribution induced by the model itself.  ...  A simple result is presented that links the learning of hidden Markov models to results in complexity theory about nonlearnability of finite automata under certain cryptographic assumptions.  ...  We thank Peter Clote for introducing us to hidden Markov models and for helpful discussions.  ... 
doi:10.1007/3-540-45790-9_21 fatcat:7wt7t4gsczc37dxqchsuk6jtz4
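The abstract above describes learning that takes place under the distribution induced by the HMM itself. As a purely illustrative aid (not code from Terwijn's chapter), the sketch below samples an observation sequence from a small HMM, which is the kind of induced distribution such a learner would see; the names `sample_hmm`, `pi`, `A`, and `B` are assumptions of this sketch, following the usual (initial, transition, emission) convention.

```python
# Minimal sketch (illustration only): sample an observation sequence from
# a discrete HMM, i.e. draw data from the distribution the model induces.
# pi, A, B are the standard HMM parameters assumed by this sketch.
import numpy as np

def sample_hmm(pi, A, B, T, rng=None):
    """Draw a length-T observation sequence from an HMM.

    pi : (N,)   initial state distribution
    A  : (N, N) transition matrix, A[i, j] = P(s_{t+1}=j | s_t=i)
    B  : (N, M) emission matrix,   B[i, k] = P(o_t=k | s_t=i)
    """
    rng = np.random.default_rng() if rng is None else rng
    states, obs = [], []
    s = rng.choice(len(pi), p=pi)                     # initial hidden state
    for _ in range(T):
        obs.append(rng.choice(B.shape[1], p=B[s]))    # emit a symbol
        states.append(s)
        s = rng.choice(A.shape[0], p=A[s])            # transition
    return np.array(states), np.array(obs)

# Example: a two-state, two-symbol HMM.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.2, 0.8]])
B = np.array([[0.9, 0.1], [0.3, 0.7]])
hidden, visible = sample_hmm(pi, A, B, T=10)
```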

THE DESIGN AND TESTING OF A FIRST-ORDER LOGIC-BASED STOCHASTIC MODELING LANGUAGE

DANIEL J. PLESS, CHAYAN CHAKRABARTI, ROSHAN RAMMOHAN, GEORGE F. LUGER
2006 International journal on artificial intelligence tools  
A first-order language is able to reason about potentially infinite classes and situations such as hidden Markov models (HMMs).  ...  Since the inference scheme for this language is based on a variant of Pearl's loopy belief propagation algorithm, we call it Loopy Logic.  ...  [19] consider model induction in the context of more traditional Bayesian Belief Networks, and Angelopoulos and Cussens [20] and Cussens [21] in the area of Constraint Logic Programming.  ... 
doi:10.1142/s0218213006003077 fatcat:lrlp4o657fdyxcg3k6fj6nwhrq

Page 5274 of Mathematical Reviews Vol. , Issue 98H [page]

1998 Mathematical Reviews  
For one-hidden-layer neural networks, covering numbers of the class of functions computed by a single hidden node bound from above the covering numbers of the convex core.  ...  Summary: “We study the learnability of monotone term decision lists in the exact model of equivalence and membership queries.  ... 

An Information-Theoretic View for Deep Learning [article]

Jingwei Zhang, Tongliang Liu, Dacheng Tao
2018 arXiv   pre-print
Specifically, letting L be the number of convolutional and pooling layers in a deep neural network, and n be the size of the training sample, we derive an upper bound on the expected generalization error  ...  This suggests that the claim 'the deeper the better' is conditioned on a small training error or E[R_S(W)].  ...  We now have a Markov model for DNNs, as shown in Figure 2. From the Markov property, we know that if U → V → W forms a Markov chain, then W is conditionally independent of U given V.  ... 
arXiv:1804.09060v8 fatcat:iamh2a3hbjde3amwbfzqhos4mq
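The snippet above quotes the Markov-chain property used in the paper's analysis. For reference, a minimal restatement of that property and the standard data-processing inequality it licenses is given below; the paper's actual bound on the expected generalization error (in terms of the depth L and sample size n) is not reproduced here.

```latex
% Markov-chain property quoted in the snippet, and the standard
% data-processing inequality it implies (the paper's specific
% generalization bound is not restated here).
\[
  U \rightarrow V \rightarrow W
  \iff
  p(w \mid u, v) = p(w \mid v),
  \qquad \text{and hence} \qquad
  I(U; W) \le I(U; V).
\]
```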

Page 529 of Computational Linguistics Vol. 22, Issue 4 [page]

1996 Computational Linguistics  
Best-first model merging for hidden Markov model induction. Technical Report TR-94-003, ICSI, Berkeley, CA, January. Tesar, Bruce. 1995. Computational Optimality Theory.  ...  Hidden Markov model induction by Bayesian model merging. In Advances in Neural Information Processing Systems 5. Morgan Kaufman, San Mateo, CA. Stolcke, Andreas and Stephen Omohundro. 1994.  ... 

Some Classes of Regular Languages Identifiable in the Limit from Positive Data [chapter]

François Denis, Aurélien Lemay, Alain Terlutte
2002 Lecture Notes in Computer Science  
Table-of-contents excerpt: Attribute Grammars with Structured Data for Natural Language Processing, p. 237; A PAC Learnability of Simple Deterministic Languages, p. 249; On the Learnability of Hidden Markov Models, p. 261; Shallow  ...  Beyond EDSM, p. 37; Consistent Identification in the Limit of Rigid Grammars from Strings Is NP-hard, p. 49; Some Classes of Regular Languages Identifiable in the Limit  ... 
doi:10.1007/3-540-45790-9_6 fatcat:nmlknwqoyfbybhb6rpomqrn7qy

Combining Generative and Discriminative Models for Hybrid Inference [article]

Victor Garcia Satorras, Zeynep Akata, Max Welling
2019 arXiv   pre-print
We apply our ideas to the Kalman filter, a Gaussian hidden Markov model for time sequences, and show, among other things, that our model can estimate the trajectory of a noisy chaotic Lorenz Attractor  ...  A graphical model is a structured representation of the data generating process. The traditional method to reason over random variables is to perform inference in this graphical model.  ...  The Hidden Markov Process: In this section we briefly explain the Hidden Markov Process and how we intend to extend it.  ... 
arXiv:1906.02547v4 fatcat:e5voyr4txzcmbjjwteiwvexhyy
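The entry above uses the Kalman filter as its running example of a Gaussian hidden Markov model. The sketch below is a minimal, generic predict/update step of the classical Kalman filter, included only to make that connection concrete; it is not the hybrid generative/discriminative model of Satorras et al., and the variable names (F, H, Q, R) are conventional assumptions of this sketch.

```python
# Minimal sketch of one classical Kalman filter step (generic textbook
# recursion, not the paper's hybrid model). F, H, Q, R are assumed names.
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update step.

    x, P : prior state mean and covariance
    z    : new observation
    F, Q : linear dynamics and process-noise covariance
    H, R : observation model and measurement-noise covariance
    """
    # Predict: propagate mean and covariance through the dynamics.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: fold in the observation via the Kalman gain.
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```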

Hidden Markov Model for Time Series Prediction

Muhammad Hanif, Faiza Sami, Mehvish Hyder, Muhammad Iqbal Ch
2017 Journal of Asian Scientific Research  
A hidden Markov model is one of the most basic and extensively used statistical tools for modeling discrete time series.  ...  Hidden Markov models pose several core problems: learning the model, evaluating observation sequences, and estimating the parameters included in the model.  ...  A hidden Markov model is a finite learnable stochastic automaton.  ... 
doi:10.18488/journal.2.2017.75.196.205 fatcat:5nwfmce22ncovli4u627uzunee
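The abstract above lists the classical HMM problems of learning, evaluation, and parameter estimation. The following minimal sketch of the textbook forward algorithm addresses the evaluation problem (computing the likelihood of an observation sequence under fixed parameters); it is a generic illustration, not code from the paper, and `pi`, `A`, `B` follow the usual (initial, transition, emission) layout.

```python
# Minimal sketch of the forward algorithm for the HMM evaluation problem:
# compute P(obs | model) for a fixed discrete-output HMM.
import numpy as np

def forward_likelihood(obs, pi, A, B):
    """Return P(obs | model) for a discrete-output HMM."""
    alpha = pi * B[:, obs[0]]              # alpha_1(i) = pi_i * b_i(o_1)
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]      # alpha_{t+1} = (alpha_t A) .* b(o)
    return alpha.sum()

# Tiny example with two hidden states and two output symbols.
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.8, 0.2], [0.1, 0.9]])
print(forward_likelihood([0, 1, 1, 0], pi, A, B))
```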

Learning dialogue dynamics with the method of moments

Merwan Barlier, Romain Laroche, Olivier Pietquin
2016 2016 IEEE Spoken Language Technology Workshop (SLT)  
Traditionally, Hidden Markov Models (HMMs) would be used to address this problem, involving a first step of handcrafting to build a dialogue model (e.g. defining potential hidden states) followed by applying  ...  In this work, we show that dialogues may be modeled by SP-RFA, a class of graphical models efficiently learnable within the MoM and directly usable in planning algorithms (such as reinforcement learning  ...  Hidden Markov Models [4] are the traditional framework addressing this problem [5].  ... 
doi:10.1109/slt.2016.7846251 dblp:conf/slt/BarlierLP16 fatcat:b6ob7tg2zjhxvjm2l2eficq2ky

History and Theoretical Basics of Hidden Markov Models [chapter]

Guy Leonard
2011 Hidden Markov Models, Theory and Applications  
In modelling terms, assuming that the Markov property holds is one of a limited number of simple ways of introducing statistical dependence into a model for a stochastic process in such a way that allows  ...  He is best known for his work on the theory of stochastic Markov processes. His research area later became known as Markov processes and Markov chains.  ...  The chapter consists of the following parts: Mathematical Basics of Hidden Markov Models; Definition of Hidden Markov Models. A Hidden Markov Model (cf.  ... 
doi:10.5772/15205 fatcat:ddhfdnt6zbevznvenfgwp6ohv4

Learning Overcomplete HMMs [article]

Vatsal Sharan, Sham Kakade, Percy Liang, Gregory Valiant
2018 arXiv   pre-print
We study the problem of learning overcomplete HMMs---those that have many hidden states but a small output alphabet.  ...  On the other hand, we show that learning is impossible given only a polynomial number of samples for HMMs with a small output alphabet and whose transition matrices are random regular graphs with large  ...  Acknowledgements Sham Kakade acknowledges funding from the Washington Research Foundation for Innovation in Data-intensive Discovery, and the NSF Award CCF-1637360.  ... 
arXiv:1711.02309v2 fatcat:n7wqrbj2tfbd5hto7i4sw3wiwy

Learning the ergodic decomposition [article]

Nabil Al-Najjar, Eran Shmaya
2014 arXiv   pre-print
We prove that his predictions about the near future become approximately those he would have made if he knew the long-run empirical frequencies of the process.  ...  A Bayesian agent learns about the structure of a stationary process from observing past outcomes.  ...  We can think about this belief as a hidden Markov model where the unobservable process ξ_0, ξ_1, ... is the time that has elapsed since the last time a war occurred.  ... 
arXiv:1406.6670v1 fatcat:oyi62hgl6vg7hccnpzrbr2t2c4

Exploration of Improved Methodology for Character Image Recognition of Two Popular Indian Scripts using Gabor Feature with Hidden Markov Model

Shubhra Saxena, Vijay Dhaka
2015 International Journal of Computer Applications  
The present work portrays a novel approach to recognizing handwritten cursive characters using a hidden Markov model (HMM).  ...  The HMM is proposed to recognize a character image. All experiments are conducted using the MATLAB toolkit.  ...  OVERVIEW OF HMM AND GABOR FILTER; 2.1. Elements of HMM: A hidden Markov model is a finite learnable stochastic automaton.  ... 
doi:10.5120/20459-2817 fatcat:fy5gxphbvfdzdg5srrbaniuyzi
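The snippet above introduces the elements of an HMM. For reference, a compact statement of the standard parameter triple is given below (Rabiner-style notation; the paper's own symbols may differ).

```latex
% Standard elements of a discrete HMM (Rabiner-style notation), as a
% compact reference for the "Elements of HMM" overview mentioned above.
\[
  \lambda = (A, B, \pi), \qquad
  a_{ij} = P(q_{t+1} = S_j \mid q_t = S_i), \qquad
  b_j(k) = P(o_t = v_k \mid q_t = S_j), \qquad
  \pi_i = P(q_1 = S_i).
\]
```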

On Learning Finite-State Quantum Sources [article]

Brendan Juba
2009 arXiv   pre-print
We show how prior techniques for learning hidden Markov models can be adapted to the quantum generator model to find that the analogous state of affairs holds: information-theoretically, a polynomial number  ...  We examine the complexity of learning the distributions produced by finite-state quantum sources.  ...  Acknowledgements The author would like to thank Seth Lloyd, Madhu Sudan, and Eran Tromer for discussions that motivated the questions considered here, and Elad Verbin for suggesting the relevance of learning  ... 
arXiv:0910.3713v1 fatcat:6pt62f7f75hyvkezmzkd4gmiie

Page 6293 of Mathematical Reviews Vol. , Issue 2003h [page]

2003 Mathematical Reviews  
It is shown that state sequences of Markov chains and output sequences of hidden Markov models both possess the mixing property, under appropriate conditions.  ...  The notion of equivalence plays an important role for both of these problems. This paper concentrates on scoring criteria to identify models.  ... 
Showing results 1 — 15 out of 3,492 results