Tractable Bayesian learning of tree belief networks
2006
Statistics and computing
In this paper we present decomposable priors, a family of priors over structure and parameters of tree belief nets for which Bayesian learning with complete observations is tractable, in the sense that ...
Besides allowing for exact Bayesian learning, these results permit us to formulate a new class of tractable latent variable models in which the likelihood of a data point is computed through an ensemble ...
This paper has presented decomposable priors, a class of priors over tree structures and parameters that makes exact Bayesian learning tractable. ...
doi:10.1007/s11222-006-5535-3
fatcat:6l7ai7ot4bfffaz3vmedzadlai
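As a hedged sketch of the construction this abstract alludes to (assuming the standard setup in which the prior over tree structures factorizes over edges, with edge hyperparameters β_uv > 0 that are illustrative here, not taken from the paper), the normalizer over all spanning trees is computable in closed form:

% Hedged sketch: a decomposable prior over tree edge sets E
P(E) = \frac{1}{Z} \prod_{(u,v) \in E} \beta_{uv},
\qquad
Z = \sum_{E' \in \mathcal{T}} \prod_{(u,v) \in E'} \beta_{uv} = \det \bar{L}(\beta),

where \mathcal{T} is the set of spanning trees over the n variables and \bar{L}(\beta) is the weighted graph Laplacian with one row and column removed. The determinant identity is the Matrix-Tree theorem, which is what makes the sum over super-exponentially many trees, and hence exact Bayesian averaging, tractable.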
The Libra Toolkit for Probabilistic Models
[article]
2015
arXiv
pre-print
The Libra Toolkit is a collection of algorithms for learning and inference with discrete probabilistic models, including Bayesian networks, Markov networks, dependency networks, and sum-product networks ...
Compared to other toolkits, Libra places a greater emphasis on learning the structure of tractable models in which exact inference is efficient. ...
The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ARO, NIH, ...
arXiv:1504.00110v1
fatcat:ez76sv6mergybbra4ecjdbeupy
Look Ma, No Latent Variables: Accurate Cutset Networks via Compilation
2019
International Conference on Machine Learning
Our approach addresses a major limitation of existing techniques for learning cutset networks from data: their accuracy is quite low compared to latent variable models such as ensembles of cutset ...
The key idea in our approach is to construct deep cutset networks by not only learning them from data but also compiling them from a more accurate latent tractable model. ...
Any opinions, findings, conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views or official policies, either expressed or implied, of DARPA ...
dblp:conf/icml/RahmanJG19
fatcat:fy3nh5ntgrgmhbgljoopdsqu54
Cutset Bayesian Networks: A New Representation for Learning Rao-Blackwellised Graphical Models
2019
Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
The main idea in CBNs is to partition the variables into two subsets X and Y, learn an (intractable) Bayesian network that represents P(X) and a tractable conditional model that represents P(Y|X). ...
In this paper, we seek to further explore this trade-off between generalization performance and inference accuracy by proposing a novel, partially tractable representation called cutset Bayesian networks ...
doi:10.24963/ijcai.2019/797
dblp:conf/ijcai/RahmanJG19
fatcat:fpxg776ig5g3rptbvmrd32ebaa
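As an illustrative sketch of the partition this abstract describes (all names below are hypothetical stand-ins, not the authors' code: sample_x draws cutset assignments x ~ P(X) from the intractable part, e.g. by forward sampling, and exact_conditional computes P(y | x) exactly in the tractable conditional model), a Rao-Blackwellised estimate of a marginal would look like:

# Hedged sketch of Rao-Blackwellised estimation in a cutset Bayesian network.
def rb_marginal(y_query, sample_x, exact_conditional, num_samples=1000):
    total = 0.0
    for _ in range(num_samples):
        x = sample_x()                           # Monte Carlo over the cutset X
        total += exact_conditional(y_query, x)   # exact inference over Y given X
    return total / num_samples                   # E_X[P(y | X)] ~= P(y)

Because P(y | x) is computed exactly rather than sampled, the estimator's variance comes only from X; that is the Rao-Blackwellisation the title refers to.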
Cutset Networks: A Simple, Tractable, and Scalable Approach for Improving the Accuracy of Chow-Liu Trees
[chapter]
2014
Lecture Notes in Computer Science
Cutset networks are rooted OR search trees, in which each OR node represents conditioning of a variable in the model, with tree Bayesian networks (Chow-Liu trees) at the leaves. ...
Our experiments on a wide variety of benchmark datasets clearly demonstrate that compared to approaches for learning other tractable models such as thin junction trees, latent tree models, arithmetic circuits ...
The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of DARPA, AFRL, ARO or the ...
doi:10.1007/978-3-662-44851-9_40
fatcat:lfll64st2ncp7lhu3nq3at7vd4
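As a minimal sketch of the structure described above (the node layout and field names are assumptions for illustration, not the authors' implementation, and chow_liu_likelihood is a hypothetical stand-in for exact evaluation in the leaf tree model), likelihood evaluation recurses through OR nodes and terminates at the Chow-Liu leaves:

# Hedged sketch: evaluating a cutset network at a complete assignment.
# An OR node conditions on one variable `var` and holds one child per value,
# weighted by P(var = value); a leaf holds a Chow-Liu tree over the rest.
def cn_likelihood(node, assignment):
    if node.is_leaf:
        return chow_liu_likelihood(node.tree, assignment)
    value = assignment[node.var]                 # conditioned (cutset) variable
    weight = node.weights[value]                 # P(var = value) at this OR node
    return weight * cn_likelihood(node.children[value], assignment)

Each recursive step prunes one variable from the model, so evaluation is linear in the number of variables once the OR path and the leaf tree are fixed.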
Propagation Algorithms for Variational Bayesian Learning
2000
Neural Information Processing Systems
We show how the belief propagation and the junction tree algorithms can be used in the inference step of variational Bayesian learning. ...
Variational approximations are becoming a widespread tool for Bayesian learning of graphical models. ...
For conjugate-exponential models, integrating both belief propagation and the junction tree algorithm into the variational Bayesian framework simply amounts to computing expectations of the natural parameters ...
dblp:conf/nips/GhahramaniB00
fatcat:bhmon227hndbfhywajhlt7ktvq
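The abstract's "expectations of the natural parameters" can be made concrete with a hedged sketch in generic exponential-family notation: for a conjugate-exponential model, the variational Bayesian E-step runs ordinary belief propagation or junction-tree propagation, but with each natural parameter replaced by its expectation under the variational posterior over parameters:

% Hedged sketch of the VB-E step for a conjugate-exponential model
p(x, h \mid \theta) = f(x, h)\, g(\theta) \exp\{\phi(\theta)^{\top} u(x, h)\}
\;\;\Longrightarrow\;\;
q(h) \propto f(x, h) \exp\{\bar{\phi}^{\top} u(x, h)\},
\qquad \bar{\phi} = \mathbb{E}_{q(\theta)}[\phi(\theta)].

Since q(h) has the same functional form as the original complete-data likelihood, the standard propagation algorithms apply unchanged with \bar{\phi} in place of \phi(\theta).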
Learning without recall in directed circles and rooted trees
2015
2015 American Control Conference (ACC)
This way, one can realize an exponentially fast rate of learning similar to the case of Bayesian (fully rational) agents. The proposed rules are a special case of Learning without Recall. ...
This work investigates the case of a network of agents that attempt to learn some unknown state of the world amongst the finitely many possibilities. ...
The authors' ongoing research focuses on the investigation and analysis of belief update rules that provide asymptotic learning in a wider variety of network structures and facilitate the tractable modeling ...
doi:10.1109/acc.2015.7171992
dblp:conf/amcc/RahimianJ15
fatcat:xi6e47gf2rf4zhecg3bo3g375e
A Non-parametric Bayesian Network Prior of Human Pose
2013
2013 IEEE International Conference on Computer Vision
In this work, we introduce a sparse Bayesian network model of human pose that is non-parametric with respect to the estimation of both its graph structure and its local distributions. ...
We describe an efficient sampling scheme for our model and show its tractability for the computation of exact log-likelihoods. ...
In this section we introduce our non-parametric Bayesian network model of human pose and show its tractability. ...
doi:10.1109/iccv.2013.162
dblp:conf/iccv/LehrmannGN13
fatcat:cmtsktin3vgbtnevvpijc4akwe
Approximate Inference by Compilation to Arithmetic Circuits
2010
Neural Information Processing Systems
We propose and evaluate a variety of techniques based on exact compilation, forward sampling, AC structure learning, Markov network parameter learning, variational inference, and Gibbs sampling. ...
In experiments on eight challenging real-world domains, we find that the methods based on sampling and learning work best: one such method (AC²-F) is faster and usually more accurate than loopy belief ...
The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ARO, DARPA ...
dblp:conf/nips/LowdD10
fatcat:3vugsjcgrnbndi6l7rftmnidwe
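As a minimal sketch of what compilation to arithmetic circuits buys (the generic sum/product DAG representation below is an assumption for illustration, not the paper's implementation), evaluating the circuit bottom-up at an evidence setting yields the probability of that evidence in time linear in circuit size:

# Hedged sketch: bottom-up evaluation of an arithmetic circuit.
# Leaves are parameter nodes (holding a CPT entry) or indicator nodes for
# variable/value pairs, which evaluate to 1 when consistent with evidence.
def ac_evaluate(node, evidence):
    if node.kind == "param":
        return node.value                        # network parameter leaf
    if node.kind == "indicator":
        v = evidence.get(node.var)               # None means unobserved
        return 1.0 if v is None or v == node.value else 0.0
    child_vals = [ac_evaluate(c, evidence) for c in node.children]
    if node.kind == "product":
        out = 1.0
        for v in child_vals:
            out *= v
        return out
    return sum(child_vals)                       # sum node

A single pass gives P(evidence), and differentiating the same circuit yields all single-variable marginals, which is why exact inference in a compact compiled circuit is efficient.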
Bayesian update of dialogue state for robust dialogue systems
2008
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing
The technique is based on updating a Bayesian Network that represents the underlying state of a Partially Observable Markov Decision Process (POMDP). ...
This paper presents a new framework for accumulating beliefs in spoken dialogue systems. ...
Figure 1 shows an example network for two time-slices of a two-slot system based on this idea. Belief updating is done with standard Dynamic Bayesian Network algorithms. ...
doi:10.1109/icassp.2008.4518765
dblp:conf/icassp/ThomsonSY08
fatcat:okszuy4ynzgrfo4obtcpa7xcei
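The underlying computation this abstract refers to is the standard POMDP belief update, written here in generic notation (the paper factorizes the state with a dynamic Bayesian network rather than enumerating it):

b'(s') = \eta \, O(o \mid s', a) \sum_{s} T(s' \mid s, a)\, b(s),

where a is the system action, o the noisy observation of the user's input, and \eta a normalizing constant; the slot-based DBN factorization is what keeps this update tractable over realistic dialogue state spaces.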
Recent Advances in Probabilistic Graphical Models
2015
International Journal of Intelligent Systems
Bayesian networks are the most prominent type of probabilistic graphical models and have experienced a remarkable methodological development during the past two decades. ...
Regardless of the increasing interest in the area, probabilistic graphical models are still facing a number of challenges, covering modeling, inference and learning. ...
Hernández-González et al. learn the parameters of a multidimensional Bayesian network classifier when the data are (subjectively) labeled by a crowd of annotators. ...
doi:10.1002/int.21697
fatcat:cvmcvjrzqvhyfa2u6hxixrqyhq
Naive Bayes models for probability estimation
2005
Proceedings of the 22nd international conference on Machine learning - ICML '05
In this paper we show that, for a wide range of benchmark datasets, naive Bayes models learned using EM have accuracy and learning time comparable to Bayesian networks with context-specific independence ...
However, they are seldom used for general probabilistic learning and inference (i.e., for estimating and computing arbitrary joint, conditional and marginal distributions). ...
Belief propagation is a message-passing algorithm originally used to perform exact inference on tree-structured Bayesian networks. ...
doi:10.1145/1102351.1102418
dblp:conf/icml/LowdD05
fatcat:tygpblqf4fg5zhmk24rozmcwpy
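As a hedged sketch of the model family this abstract refers to (a naive Bayes model with a latent cluster variable trained by EM; the variable names and the binary-data assumption are illustrative, not the authors' code):

import numpy as np

# Hedged sketch: EM for a naive Bayes model with latent cluster C over
# binary variables X_1..X_d, so P(x) = sum_c P(c) prod_j P(x_j | c).
# Once trained, any joint, conditional, or marginal query is tractable
# because summing out C costs only O(k) per factor.
def nb_em(X, k, iters=50, rng=np.random.default_rng(0)):
    n, d = X.shape
    pi = np.full(k, 1.0 / k)                     # P(c)
    theta = rng.uniform(0.25, 0.75, (k, d))      # P(x_j = 1 | c)
    for _ in range(iters):
        # E-step: responsibilities r[i, c] proportional to P(c, x_i)
        log_p = (np.log(pi)
                 + X @ np.log(theta).T
                 + (1 - X) @ np.log(1 - theta).T)
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixing weights and per-cluster Bernoullis
        pi = r.mean(axis=0)
        theta = (r.T @ X + 1.0) / (r.sum(axis=0)[:, None] + 2.0)  # Laplace smoothing
    return pi, theta

The latent cluster variable is the only non-leaf, so inference never requires message passing beyond a single sum over k clusters, which is what makes this family attractive for general probability estimation.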
Dynamic Data Feed to Bayesian Network Model and SMILE Web Application
2008
2008 Ninth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing
Bayesian networks (also called belief networks, Bayesian belief networks, causal probabilistic networks, or causal networks) (Pearl, 1988) are acyclic directed ...
There exist several efficient algorithms, however, that make belief updating in graphs consisting of tens or hundreds of variables tractable. ...
Bayesian networks are a very general and powerful tool that can be used for a large number of problems involving uncertainty: reasoning, learning, planning and perception. ...
doi:10.1109/snpd.2008.67
dblp:conf/snpd/JongsawatPP08
fatcat:p3ei2ffpxrcrxjhrbwn2ovxtau
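Concretely (a standard identity, not specific to this paper), a Bayesian network over variables X_1, ..., X_n factorizes the joint distribution according to its directed acyclic graph:

P(X_1, \ldots, X_n) = \prod_{i=1}^{n} P\big(X_i \mid \mathrm{Pa}(X_i)\big),

where Pa(X_i) denotes the parents of X_i in the graph; the efficient belief-updating algorithms mentioned above all exploit this factorization.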
Approximating Posterior Distributions in Belief Networks Using Mixtures
1997
Neural Information Processing Systems
We derive an efficient algorithm for updating the mixture parameters and apply it to the problem of learning in sigmoid belief networks. ...
Exact inference in densely connected Bayesian networks is computationally intractable, and so there is considerable interest in developing effective approximation schemes. ...
Introduction Bayesian belief networks can be regarded as a fully probabilistic interpretation of feedforward neural networks. ...
dblp:conf/nips/BishopLJJ97
fatcat:fbhwx3kmdrgitnv3r73fchrctm
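A hedged sketch of the objective being optimized, in standard mixture-variational notation (the exact form in the paper may differ): with a mixture approximating posterior q(H) = \sum_m \pi_m q_m(H) over hidden units H and visible data V, the variational lower bound decomposes as

\ln p(V) \ge \mathcal{L}(q)
= \sum_m \pi_m \mathcal{L}(q_m) + I(m; H)
\ge \sum_m \pi_m \mathcal{L}(q_m),

where \mathcal{L}(q_m) is the single-component bound and I(m; H) \ge 0 is the mutual information between the mixture label and the hidden variables under the joint \pi_m q_m(H). The nonnegative mutual-information term is why a mixture can strictly improve on any convex combination of single-mode approximations.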
Locally Bayesian learning in networks
2020
Theoretical Economics
We present a tractable learning rule to implement such locally Bayesian learning: each agent extracts new information using the full history of observed reports in her local network. ...
Despite their limited network knowledge, agents learn correctly when the network is a social quilt, a tree‐like union of cliques. ...
Moreover, locally Bayesian learning is far more tractable than Bayesian learning and is thus potentially useful for other network learning models. Our model can be extended in several directions. ...
doi:10.3982/te3273
fatcat:xdjtnd2gx5davd3vv3dwcwsoqm
Showing results 1 — 15 out of 3,170 results