Variational Learning in Graphical Models and Neural Networks [chapter]

Christopher M. Bishop
ICANN 98, 1998
Variational methods are becoming increasingly popular for inference and learning in probabilistic models. By providing bounds on quantities of interest, they offer a more controlled approximation framework than techniques such as Laplace's method, while avoiding the mixing and convergence issues of Markov chain Monte Carlo methods, or the possible computational intractability of exact algorithms. In this paper we review the underlying framework of variational methods and discuss example
applications involving sigmoid belief networks, Boltzmann machines and feed-forward neural networks.
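The bound the abstract refers to is the standard variational lower bound: for a latent-variable model with joint $p(x,z)$ and any distribution $q(z)$, Jensen's inequality gives

$$\ln p(x) = \ln \sum_z p(x, z) \;\ge\; \sum_z q(z) \ln \frac{p(x, z)}{q(z)} \;=\; \mathcal{L}(q),$$

and maximising $\mathcal{L}(q)$ over a tractable family of $q$ tightens the bound. As an illustrative sketch (not code from the paper), naive mean-field inference in a small Boltzmann machine restricts $q$ to a fully factorised distribution over binary units, which yields the familiar fixed-point equations $m_i = \sigma\!\left(\sum_j W_{ij} m_j + b_i\right)$:

```python
import numpy as np

# Hypothetical sketch: naive mean-field inference for a small Boltzmann
# machine with units s_i in {0, 1} and energy E(s) = -0.5 s^T W s - b^T s.
# The factorised posterior q(s) = prod_i m_i^{s_i} (1 - m_i)^{1 - s_i} is
# fit by iterating the fixed-point equations m_i = sigmoid(sum_j W_ij m_j + b_i),
# which are the stationarity conditions of the variational lower bound.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mean_field(W, b, n_iters=200, damping=0.5):
    """Return approximate marginals m_i = q(s_i = 1)."""
    m = np.full(len(b), 0.5)                      # uniform initialisation
    for _ in range(n_iters):
        # damped synchronous update for stable convergence
        m = (1 - damping) * m + damping * sigmoid(W @ m + b)
    return m

rng = np.random.default_rng(0)
n = 5
W = rng.normal(scale=0.5, size=(n, n))
W = 0.5 * (W + W.T)                               # symmetric weights
np.fill_diagonal(W, 0.0)                          # no self-connections
b = rng.normal(size=n)
m = mean_field(W, b)
print(m)
```

The weights, biases, and damping factor here are arbitrary illustrative choices; the converged `m` gives the approximate marginal probabilities that each unit is on under the mean-field posterior.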
doi:10.1007/978-1-4471-1599-1_2