A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2019; you can also visit the original URL.
The file type is application/pdf.
Mixture Representations for Inference and Learning in Boltzmann Machines
[article]
2013
arXiv
pre-print
We present results for both inference and learning to demonstrate the effectiveness of this approach. ...
Boltzmann machines are undirected graphical models with two-state stochastic variables, in which the logarithms of the clique potentials are quadratic functions of the node states. ...
Acknowledgements We are grateful to Tommi Jaakkola for helpful comments. Also, we would like to thank David MacKay for a stimulating discussion about mean field theory. ...
arXiv:1301.7393v1
fatcat:hkyweq5f5rhdblbtnppx7d77pq
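The definition quoted in the snippet above (log clique potentials quadratic in the two-state node variables) corresponds to the standard Boltzmann machine energy. As a minimal sketch in conventional notation (the symbols $W$, $b$, and $s_i \in \{0,1\}$ are the usual convention, not taken from the paper itself):

```latex
E(\mathbf{s}) = -\sum_{i<j} W_{ij}\, s_i s_j \;-\; \sum_i b_i s_i,
\qquad
P(\mathbf{s}) = \frac{e^{-E(\mathbf{s})}}{\sum_{\mathbf{s}'} e^{-E(\mathbf{s}')}}
```

so that $\log$ of each pairwise potential is quadratic in the node states, as the snippet says.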
Graphical Models: Foundations of Neural Computation
2002
Pattern Analysis and Applications
A graphical model has both a structural component, encoded by the pattern of edges in the graph, and a parametric component, encoded by numerical "potentials" associated with sets of edges in the ...
In particular, general inference algorithms allow statistical quantities (such as likelihoods and conditional probabilities) and information-theoretic quantities (such as mutual infor ...
The inference and learning algorithms for Boltzmann machines, and in particular the treatment of hidden units, exemplify more general solutions to the problem of inference and learning in graphic ...
doi:10.1007/s100440200036
fatcat:bt75wlwba5hefifledkf62lv4e
Learning multiple layers of representation
2007
Trends in Cognitive Sciences
The generative models that are most familiar in statistics and machine learning are the ones for which the posterior distribution can be inferred efficiently and exactly because the model has been strongly ...
Consider, for example, a mixture of gaussians model in which each data-vector is assumed to come from exactly one of the multivariate gaussian distributions in the mixture. ...
Acknowledgements I thank Yoshua Bengio, David MacKay, Terry Sejnowski and my past and present postdoctoral fellows and graduate students for helping me to understand these ideas, and NSERC, CIAR, CFI and ...
doi:10.1016/j.tics.2007.09.004
pmid:17921042
fatcat:nulwaexkjreh5h7udl4bezd24a
Restricted Boltzmann Machines and Their Extensions for Face Modeling
2017
Biomedical Journal of Scientific & Technical Research
Restricted Boltzmann Machines, Deep Boltzmann Machines, and their extensions have attracted much attention and become powerful tools for many machine learning tasks. ...
In this paper, we aim to give a review of recent developments of such models for sequential data modelling. ...
Notice that the bias terms for visible and hidden units are ignored in Eqn. (10) for simplifying the representation. ...
doi:10.26717/bjstr.2017.01.000336
fatcat:mg4czqdceng7viogpu5b4dzlg4
Unsupervised deep learning
[chapter]
2015
Advances in Independent Component Analysis and Learning Machines
After this, we consider various structures used in deep learning, including restricted Boltzmann machines, deep belief networks, deep Boltzmann machines, and nonlinear autoencoders. ...
Deep neural networks with several layers have during the last years become a highly successful and popular research topic in machine learning due to their excellent performance in many benchmark problems ...
Differences between Deep Belief Networks and Deep Boltzmann Machines Thus far we have discussed only learning, but not using these networks for inference and generation of new samples. ...
doi:10.1016/b978-0-12-802806-3.00007-5
fatcat:35unnrrzjbdapgqrpcn4d7vnem
Robust Boltzmann Machines for recognition and denoising
2012
2012 IEEE Conference on Computer Vision and Pattern Recognition
While Boltzmann Machines have been successful at unsupervised learning and density modeling of images and speech data, they can be very sensitive to noise in the data. ...
Image denoising and inpainting correspond to posterior inference in the RoBM. ...
We have described a novel model which allows Boltzmann Machines to be robust to noise and occlusions. ...
doi:10.1109/cvpr.2012.6247936
dblp:conf/cvpr/TangSH12
fatcat:ll4c4ofpbjgj5dt6qocna3vwuu
Where Do Features Come From?
2013
Cognitive Science
One of these generative models, the restricted Boltzmann machine (RBM), has no connections between its hidden units and this makes perceptual inference and learning much simpler. ...
Combining this initialization method with a new method for finetuning the weights finally leads to the first efficient way of training Boltzmann machines with many hidden layers and millions of weights ...
Summary of the main story In about 1986, backpropagation replaced the Boltzmann machine learning algorithm as the method of choice for learning distributed representations. ...
doi:10.1111/cogs.12049
pmid:23800216
fatcat:gsqrcryoazeivp2vjr44n6mv2q
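The simplification mentioned in the snippet above (no connections between hidden units) makes the hidden units conditionally independent given the visible vector, which is what keeps perceptual inference simple. In conventional RBM notation (the symbols here are assumptions, not the article's own):

```latex
p(h_j = 1 \mid \mathbf{v}) = \sigma\!\Big(b_j + \sum_i v_i\, W_{ij}\Big),
\qquad
p(\mathbf{h} \mid \mathbf{v}) = \prod_j p(h_j \mid \mathbf{v}),
\qquad
\sigma(x) = \frac{1}{1 + e^{-x}}
```

so each hidden unit can be sampled in parallel from a logistic function of its visible input.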
Learning and Selecting Features Jointly with Point-wise Gated Boltzmann Machines
2013
International Conference on Machine Learning
To address this problem, we propose a point-wise gated Boltzmann machine, a unified generative model that combines feature learning and feature selection. ...
Unsupervised feature learning has emerged as a promising tool in learning representations from unlabeled data. ...
Acknowledgments This work was supported in part by NSF IIS 1247414 and a Google Faculty Research Award. ...
dblp:conf/icml/SohnZLL13
fatcat:4xcsch3op5ahjndycwqr5bsvxu
Deep Boltzmann Machines
2009
Journal of machine learning research
We present a new learning algorithm for Boltzmann machines that contain many layers of hidden variables. ...
We present results on the MNIST and NORB datasets showing that deep Boltzmann machines learn good generative models and perform well on handwritten digit and visual object recognition tasks. ...
Acknowledgments We thank Vinod Nair for sharing his code for blurring and translating NORB images. This research was supported by NSERC and Google. ...
dblp:journals/jmlr/SalakhutdinovH09
fatcat:syxpyp5uw5dsdb2rmbnkaswufq
Unsupervised and Supervised Visual Codes with Restricted Boltzmann Machines
[chapter]
2012
Lecture Notes in Computer Science
In this work, we propose a novel visual codebook learning approach using the restricted Boltzmann machine (RBM) as our generative model. Our contribution is three-fold. ...
Firstly, we steer the unsupervised RBM learning using a regularization scheme, which decomposes into a combined prior for the sparsity of each feature's representation as well as the selectivity for each ...
Fig. 1 shows our BoW framework using restricted Boltzmann machines (RBM) to learn visual codes and perform feature coding during inference. ...
doi:10.1007/978-3-642-33715-4_22
fatcat:vjoe6a7qlrdoxhlm424gnptu6y
Switchable Deep Network for Pedestrian Detection
2014
2014 IEEE Conference on Computer Vision and Pattern Recognition
The SDN automatically learns hierarchical features, salience maps, and mixture representations of different body parts. ...
At the part and body levels, it is able to infer the most appropriate template for the mixture models of each part and the whole body. ...
The authors would like to thank Wanli Ouyang and Xingyu Zeng for helpful suggestions, and Yonglong Tian for his main contribution in experiments and part of the derivation of the model. ...
doi:10.1109/cvpr.2014.120
dblp:conf/cvpr/LuoTWT14
fatcat:2ooo25dkbjhkjgd3or2crzjuci
Factorial Hidden Restricted Boltzmann Machines for noise robust speech recognition
2012
2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
We present the Factorial Hidden Restricted Boltzmann Machine (FHRBM) for robust speech recognition. ...
to learn a parts-based representation of noisy speech data that can generalize better to previously unseen noise compositions. ...
INTRODUCTION Restricted Boltzmann machines (RBMs) have recently been applied to several well established problems in machine learning and signal processing, with great success. ...
doi:10.1109/icassp.2012.6288869
dblp:conf/icassp/RennieFD12
fatcat:cbfbcdys55eghi3rxut63z5pdy
Asymmetric Parallel Boltzmann Machines are Belief Networks
1992
Neural Computation
The method is similar to Boltzmann machine learning, but without the "negative phase." Lack of a negative phase allows learning to proceed significantly faster than in a Boltzmann machine. ...
Mixture models and hidden ...
Neural Computation 4, 832-834 (1992) © 1992 Massachusetts Institute of Technology
doi:10.1162/neco.1992.4.6.832
fatcat:mfjzujcjqnchhpfwxzb6bcdxoe
Listening with Your Eyes: Towards a Practical Visual Speech Recognition System Using Deep Boltzmann Machines
2015
2015 IEEE International Conference on Computer Vision (ICCV)
This paper presents a novel feature learning method for visual speech recognition using Deep Boltzmann Machines (DBM). ...
features and can be adopted by other deep learning systems. ...
In this paper, we use Deep Boltzmann Machines (DBM) [18] to learn the visual features. ...
doi:10.1109/iccv.2015.26
dblp:conf/iccv/SuiBT15
fatcat:7pulo4baavccxmi6dxltrr6zfm
Variational Learning in Graphical Models and Neural Networks
[chapter]
1998
ICANN 98
Variational methods are becoming increasingly popular for inference and learning in probabilistic models. ...
In this paper we review the underlying framework of variational methods and discuss example applications involving sigmoid belief networks, Boltzmann machines and feed-forward neural networks. ...
Acknowledgements I would like to thank Brendan Frey, Tommi Jaakkola, Michael Jordan, Neil Lawrence, David MacKay and Michael Tipping for helpful discussions regarding variational methods. ...
doi:10.1007/978-1-4471-1599-1_2
fatcat:wwba75whkneo7fvdaf75xvuu44
Showing results 1 — 15 out of 3,368 results