5,787 Hits in 5.4 sec

Contextual classification with functional Max-Margin Markov Networks

Daniel Munoz, J. Andrew Bagnell, Nicolas Vandapel, Martial Hebert
2009 2009 IEEE Conference on Computer Vision and Pattern Recognition  
In this work we adapt a functional gradient approach for learning high-dimensional parameters of random fields in order to perform discrete, multi-label classification.  ...  To this end, the Markov Random Field framework has proven to be a model of choice as it uses contextual information to yield improved classification results over locally independent classifiers.  ...  Huang, and the reviewers for their comments, and S. K. Divvala for help with the geometric surface experiments. (Table 2 reports accuracies of 72.8%, 84.9%, 86.0%, and 87.1%.)  ...
doi:10.1109/cvpr.2009.5206590 dblp:conf/cvpr/MunozBVH09 fatcat:2dwhbyvqkfhunifixzafdlfvoy
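
As a quick illustration of the contextual-classification idea in the entry above (not the paper's functional-gradient learner), the sketch below runs iterated conditional modes on a toy pairwise MRF: unary log-scores stand in for a locally independent classifier, and a Potts-style agreement term lets neighboring sites correct a noisy local decision. The chain structure and all numbers are made up.

    # Minimal sketch: pairwise MRF context refines independent per-site decisions.
    import numpy as np

    def icm(unary, neighbors, smooth=1.0, iters=10):
        """unary: (n_sites, n_labels) log-scores from a local classifier.
        neighbors: list of index lists, neighbors[i] = sites adjacent to i."""
        labels = unary.argmax(axis=1)            # start from independent decisions
        for _ in range(iters):
            for i in range(len(labels)):
                # local score: unary term plus agreement with neighboring labels
                agree = np.array([(labels[neighbors[i]] == l).sum()
                                  for l in range(unary.shape[1])])
                labels[i] = (unary[i] + smooth * agree).argmax()
        return labels

    # toy 1-D chain: site 2 has a noisy local score that context should fix
    unary = np.log(np.array([[0.9, 0.1],
                             [0.8, 0.2],
                             [0.4, 0.6],   # locally mislabeled
                             [0.9, 0.1],
                             [0.85, 0.15]]))
    neighbors = [[1], [0, 2], [1, 3], [2, 4], [3]]
    print(icm(unary, neighbors))  # contextual smoothing flips site 2 to label 0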

Boosted Markov Networks for Activity Recognition

Tran The Truyen, Hung Hai Bui, S. Venkatesh
2005 2005 International Conference on Intelligent Sensors, Sensor Networks and Information Processing  
We address the recognition of multilevel activities in this paper via a conditional Markov random field (MRF), known as the dynamic conditional random field (DCRF).  ...  Distinct from most existing work, our algorithm can handle hidden variables (missing labels) and is particularly attractive for smarthouse domains where reliable labels are often sparsely observed.  ...  AdaBoost.MRF: In this section we describe AdaBoost.MRF, the boosting algorithm for parameter estimation of general Markov random fields.  ...
doi:10.1109/issnip.2005.1595594 fatcat:e2no6a7ydvf2jgidnwwa3atll4

Conditional Neural Fields

Jian Peng, Liefeng Bo, Jinbo Xu
2009 Neural Information Processing Systems  
Conditional random fields (CRFs) are widely used for sequence labeling tasks in natural language processing and biological sequence analysis.  ...  To model the nonlinear relationship between input and output, we propose a new conditional probabilistic graphical model, Conditional Neural Fields (CNF), for sequence labeling.  ...  Acknowledgements We thank Nathan Srebro and David McAllester for insightful discussions.  ...
dblp:conf/nips/PengBX09 fatcat:je23fhvqcrf6hg6yppxlcdjrgi
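
A minimal sketch of the CNF idea, assuming toy sizes and random weights rather than anything from the paper: a small hidden layer turns input features into nonlinear unary potentials, and a linear-chain CRF layer combines them with label-transition scores via Viterbi decoding.

    import numpy as np

    rng = np.random.default_rng(0)
    T, D, H, L = 6, 4, 8, 3              # sequence length, input dim, hidden units, labels
    X = rng.normal(size=(T, D))          # toy input sequence
    W1, W2 = rng.normal(size=(D, H)), rng.normal(size=(H, L))
    trans = rng.normal(size=(L, L))      # transition potentials between labels

    unary = np.tanh(X @ W1) @ W2         # nonlinear "neural" unary potentials

    # Viterbi decoding over the chain
    delta = unary[0].copy()
    back = np.zeros((T, L), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + trans + unary[t][None, :]
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0)
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    print(path[::-1])                    # most likely label sequence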

An Introduction to Conditional Random Fields [article]

Charles Sutton, Andrew McCallum
2010 arXiv   pre-print
This tutorial describes conditional random fields, a popular probabilistic method for structured prediction.  ...  We describe methods for inference and parameter estimation for CRFs, including practical issues for implementing large scale CRFs.  ...  Acknowledgments We thank Francine Chen, Benson Limketkai, Gregory Druck, Kedar Bellare, and Ray Mooney for useful comments on earlier versions of this tutorial.  ... 
arXiv:1011.4088v1 fatcat:bhbw6i74cfg35csu3wboyqubee
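
For readers new to CRFs, the following sketch (toy potentials, not taken from the tutorial) shows the forward recursion that computes the log partition function log Z(x) of a linear-chain CRF, the quantity needed for maximum-likelihood parameter estimation, using log-sum-exp for numerical stability.

    import numpy as np
    from scipy.special import logsumexp

    def crf_log_partition(unary, trans):
        """unary: (T, L) per-position label scores; trans: (L, L) transition scores."""
        alpha = unary[0]
        for t in range(1, len(unary)):
            # alpha[j] = logsumexp_i (alpha[i] + trans[i, j]) + unary[t, j]
            alpha = logsumexp(alpha[:, None] + trans, axis=0) + unary[t]
        return logsumexp(alpha)

    rng = np.random.default_rng(1)
    unary, trans = rng.normal(size=(5, 3)), rng.normal(size=(3, 3))
    print(crf_log_partition(unary, trans))   # log Z(x) for this toy chain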

A Generalized Mean Field Algorithm for Variational Inference in Exponential Families [article]

Eric P. Xing, Michael I. Jordan, Stuart Russell
2012 arXiv   pre-print
We present a class of generalized mean field (GMF) algorithms for approximate inference in complex exponential family models, which entails limiting the optimization over the class of cluster-factorizable  ...  But due to the need for model-specific derivation of the optimization equations and unclear inference quality in various models, it is not widely used as a generic approximate inference algorithm.  ...  Acknowledgments We thank Yair Weiss and colleagues for their generosity in sharing their code for exact inference and GBP on grids.  ...
arXiv:1212.2512v1 fatcat:hbdgnfrza5d5dkgle3op4a2kiy
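
The simplest member of this family is naive mean field, which the paper's GMF algorithms generalize to clusters of variables. The sketch below applies naive mean-field coordinate ascent to a toy Ising model with made-up couplings and fields.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 10
    J = rng.normal(scale=0.3, size=(n, n)); J = (J + J.T) / 2; np.fill_diagonal(J, 0)
    h = rng.normal(scale=0.5, size=n)        # couplings and fields, x_i in {-1,+1}

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    q = np.full(n, 0.5)                      # q_i = q(x_i = +1)
    for _ in range(100):                     # coordinate ascent on the mean-field bound
        for i in range(n):
            m = 2 * q - 1                    # current mean magnetizations
            q[i] = sigmoid(2 * (h[i] + J[i] @ m))
    print(np.round(2 * q - 1, 3))            # approximate marginal magnetizations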

Primal sparse Max-margin Markov networks

Jun Zhu, Eric P. Xing, Bo Zhang
2009 Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining - KDD '09  
Max-margin Markov networks (M^3N) have shown great promise in structured prediction and relational learning. Due to the KKT conditions, the M^3N enjoys dual sparsity.  ...  In this paper, we present an ℓ1-norm regularized max-margin Markov network (ℓ1-M^3N), which enjoys dual and primal sparsity simultaneously.  ...  Discriminative models, such as conditional random fields (CRFs) [16] and max-margin Markov networks (M^3N) [20], usually have a complex and high-dimensional feature space, because in principle they  ...
doi:10.1145/1557019.1557132 dblp:conf/kdd/ZhuXZ09 fatcat:4qj5ls34i5ahtbhokrt6mjlu5i
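
The following is an illustrative sketch only, not the paper's algorithm: it applies the ℓ1-regularized max-margin idea to the degenerate "structured" case of plain multiclass classification, alternating a subgradient step on the margin loss with soft-thresholding (the proximal operator of the ℓ1 norm), which is what produces primal sparsity.

    import numpy as np

    rng = np.random.default_rng(3)
    n, d, k = 200, 20, 4
    X = rng.normal(size=(n, d))
    W_true = np.zeros((k, d)); W_true[:, :5] = rng.normal(size=(k, 5))
    y = (X @ W_true.T).argmax(axis=1)        # labels from a sparse ground truth

    W, lam, lr = np.zeros((k, d)), 0.02, 0.1
    for it in range(300):
        scores = X @ W.T
        margin = scores + 1.0; margin[np.arange(n), y] -= 1.0   # loss-augmented scores
        y_hat = margin.argmax(axis=1)                           # most violating labels
        G = np.zeros_like(W)
        np.add.at(G, y_hat, X); np.add.at(G, y, -X)             # hinge subgradient
        W -= lr * G / n
        W = np.sign(W) * np.maximum(np.abs(W) - lr * lam, 0.0)  # l1 proximal step
    print("nonzero weights:", int((np.abs(W) > 1e-8).sum()), "of", W.size)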

Fully Connected Deep Structured Networks [article]

Alexander G. Schwing, Raquel Urtasun
2015 arXiv   pre-print
Hereby, convolutional networks are trained to provide good local pixel-wise features for the second step, which is traditionally a more global graphical model.  ...  Convolutional neural networks with many layers have recently been shown to achieve excellent results on many high-level tasks such as image classification, object detection and more recently also semantic  ...  Joint training of conditional random fields and deep networks was also discussed recently by [4] for graphical models in general.  ...
arXiv:1503.02351v1 fatcat:vwtmscvzhzfxxfc2akza3gcxwe
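
A toy sketch of the inference component discussed above (not the authors' joint training code): parallel mean-field updates for a fully connected CRF whose pairwise term is a Gaussian kernel over per-pixel feature vectors, applied on top of made-up "CNN" unary logits for a handful of pixels.

    import numpy as np

    rng = np.random.default_rng(4)
    N, L = 30, 3                                   # pixels, labels
    feats = rng.normal(size=(N, 2))                # e.g. position/colour features
    unary = rng.normal(size=(N, L))                # logits from a local predictor

    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / 2.0); np.fill_diagonal(K, 0)  # dense Gaussian affinities
    compat = 1.0 - np.eye(L)                       # Potts label compatibility

    def softmax(a):
        e = np.exp(a - a.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    Q = softmax(unary)
    for _ in range(10):                            # parallel mean-field iterations
        message = K @ Q                            # aggregate beliefs of all other pixels
        Q = softmax(unary - message @ compat)      # penalize disagreeing labels
    print(Q.argmax(axis=1))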

An Introduction to Restricted Boltzmann Machines [chapter]

Asja Fischer, Christian Igel
2012 Lecture Notes in Computer Science  
Boltzmann machines can also be regarded as particular graphical models [22], more precisely, undirected graphical models, also known as Markov random fields.  ...  Different learning algorithms for RBMs are discussed.  ...  The authors acknowledge support from the German Federal Ministry of Education and Research within the National Network Computational Neuroscience under grant number 01GQ0951 (Bernstein Fokus "Learning  ...
doi:10.1007/978-3-642-33275-3_2 fatcat:eiwy33ozsfhlveeowopvwzrjua
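
One of the learning algorithms such introductions typically cover is contrastive divergence. The sketch below trains a tiny binary RBM on made-up data with a single Gibbs step per update (CD-1); sizes, learning rate, and data are all illustrative.

    import numpy as np

    rng = np.random.default_rng(5)
    n_v, n_h, lr = 6, 4, 0.1
    W = 0.01 * rng.normal(size=(n_v, n_h))
    b, c = np.zeros(n_v), np.zeros(n_h)            # visible and hidden biases
    data = rng.integers(0, 2, size=(100, n_v)).astype(float)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for epoch in range(50):
        for v0 in data:
            ph0 = sigmoid(v0 @ W + c)              # P(h=1 | v0)
            h0 = (rng.random(n_h) < ph0).astype(float)
            pv1 = sigmoid(h0 @ W.T + b)            # one Gibbs step: reconstruct v
            v1 = (rng.random(n_v) < pv1).astype(float)
            ph1 = sigmoid(v1 @ W + c)
            # CD-1 update: data statistics minus reconstruction statistics
            W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
            b += lr * (v0 - v1)
            c += lr * (ph0 - ph1)
    print("mean reconstruction error:",
          float(np.mean((data - sigmoid(sigmoid(data @ W + c) @ W.T + b)) ** 2)))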

Sparse Markov net learning with priors on regularization parameters

Katya Scheinberg, Irina Rish, Narges Bani Asadi
2010 International Symposium on Artificial Intelligence and Mathematics  
Our general formulation allows a vector of regularization parameters and is well-suited for learning structured graphs such as scale-free networks where the sparsity of nodes varies significantly.  ...  In this paper, we consider the problem of structure recovery in a Markov network over Gaussian variables, which is equivalent to finding the zero pattern of the sparse inverse covariance matrix.  ...  Our Approach: Let X = {X_1, ..., X_p} be a set of p random variables, and let G = (V, E) be a Markov network (a Markov Random Field, or MRF) representing the conditional independence structure of the  ...
dblp:conf/isaim/ScheinbergRA10 fatcat:eq6l3gs3ffelzlrtbn3ttvregi
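
A closely related baseline with a single scalar penalty can be run with scikit-learn's graphical lasso, which recovers the zero pattern of the inverse covariance of a Gaussian Markov network; the paper above generalizes this to a vector of regularization parameters with priors, which scikit-learn does not expose. The data below are synthetic.

    import numpy as np
    from sklearn.covariance import GraphicalLasso

    rng = np.random.default_rng(6)
    p = 5
    prec_true = np.eye(p)
    prec_true[0, 1] = prec_true[1, 0] = 0.4        # a single conditional dependency
    cov_true = np.linalg.inv(prec_true)
    X = rng.multivariate_normal(np.zeros(p), cov_true, size=2000)

    model = GraphicalLasso(alpha=0.05).fit(X)
    print(np.round(model.precision_, 2))           # near-zero off-diagonals = missing edges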

On primal and dual sparsity of Markov networks

Jun Zhu, Eric P. Xing
2009 Proceedings of the 26th Annual International Conference on Machine Learning - ICML '09  
Combining these two methods, an ℓ1-norm max-margin Markov network (ℓ1-M^3N) can achieve both types of sparsity.  ...  This paper analyzes its connections to the Laplace max-margin Markov network (LapM^3N), which inherits the dual sparsity of max-margin models but is pseudo-primal sparse, and to a novel adaptive M^3N  ...  and 60605003; National Key Foundation R&D Projects 2003CB317007, 2004CB318108 and 2007CB311003; and Basic Research Foundation of Tsinghua National TNList Lab.  ...
doi:10.1145/1553374.1553536 dblp:conf/icml/ZhuX09 fatcat:zsa6onb33ngc5anrxvhwvclkay

A Fast Variational Approach for Learning Markov Random Field Language Models

Yacine Jernite, Alexander M. Rush, David A. Sontag
2015 International Conference on Machine Learning  
We present a method for global-likelihood optimization of a Markov random field language model exploiting long-range contexts in time independent of the corpus size.  ...  We demonstrate the efficiency of this method both for language modelling and for part-of-speech tagging.  ...  Acknowledgments YJ and DS gratefully acknowledge the support of the Defense Advanced Research Projects Agency (DARPA) Probabilistic Programming for Advanced Machine Learning Program under Air Force Research  ...
dblp:conf/icml/JerniteRS15 fatcat:wrnxjco3yzg5zo27gd52occ6a4

Markov Logic [chapter]

Pedro Domingos, Stanley Kok, Daniel Lowd, Hoifung Poon, Matthew Richardson, Parag Singla
2008 Lecture Notes in Computer Science  
Markov logic accomplishes this by attaching weights to first-order formulas and viewing them as templates for features of Markov networks.  ...  Inference algorithms for Markov logic draw on ideas from satisfiability, Markov chain Monte Carlo and knowledge-based model construction.  ...  , a Sloan Fellowship and NSF CAREER Award to the first author, and a Microsoft Research fellowship awarded to the third author.  ... 
doi:10.1007/978-3-540-78652-8_4 fatcat:gx4recbhrfbatpeus7usthj3by
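
A toy sketch of the core construction described above, with an invented two-person domain and a single weighted rule Smokes(x) => Cancer(x): the rule is grounded into features of a Markov network, and a world's probability is proportional to the exponentiated sum of weights of its satisfied groundings.

    import itertools
    import math

    people = ["Anna", "Bob"]
    w_rule = 1.5                                   # weight of: Smokes(x) => Cancer(x)

    def n_satisfied(world):
        # world maps ground atoms to truth values; count satisfied groundings
        return sum((not world[("Smokes", x)]) or world[("Cancer", x)] for x in people)

    atoms = [("Smokes", x) for x in people] + [("Cancer", x) for x in people]
    worlds = [dict(zip(atoms, vals))
              for vals in itertools.product([False, True], repeat=len(atoms))]
    weights = [math.exp(w_rule * n_satisfied(w)) for w in worlds]
    Z = sum(weights)

    # marginal P(Cancer(Anna)) under this tiny Markov logic network
    p = sum(wt for w, wt in zip(worlds, weights) if w[("Cancer", "Anna")]) / Z
    print(round(p, 3))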

Learning with Blocks: Composite Likelihood and Contrastive Divergence

Arthur U. Asuncion, Qiang Liu, Alexander T. Ihler, Padhraic Smyth
2010 Journal of machine learning research  
We empirically analyze the performance of blocked contrastive divergence on various models, including visible Boltzmann machines, conditional random fields, and exponential random graph models, and we  ...  In this paper, we present a formal connection between the optimization of composite likelihoods and the well-known contrastive divergence algorithm.  ...  under grant AR-44882 BIRT revision (AI, QL), and by Google (PS).  ... 
dblp:journals/jmlr/AsuncionLIS10 fatcat:pfkqmbfqgvgbnfu4fhcghvcaju
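
The simplest composite likelihood uses single-variable blocks, i.e. pseudolikelihood. The sketch below fits a fully visible Boltzmann machine on synthetic spin data by gradient ascent on that objective; the paper's contribution is to relate blocked versions of such objectives to contrastive divergence, which this toy code does not implement.

    import numpy as np

    rng = np.random.default_rng(7)
    n, d = 500, 8
    x0 = rng.integers(0, 2, size=n) * 2 - 1
    x1 = x0 * np.where(rng.random(n) < 0.9, 1, -1)          # x1 mostly agrees with x0
    rest = rng.integers(0, 2, size=(n, d - 2)) * 2 - 1      # remaining spins are independent
    data = np.column_stack([x0, x1, rest]).astype(float)    # spins in {-1,+1}

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    W = np.zeros((d, d))                           # symmetric couplings, zero diagonal
    lr = 0.05
    for it in range(200):
        # P(x_i = +1 | x_{-i}) = sigmoid(2 * sum_j W_ij x_j), for every i at once
        field = data @ W                           # (n, d) conditional fields
        p_plus = sigmoid(2 * field)
        resid = (data + 1) / 2 - p_plus            # observed indicator minus model prob
        G = (2 * resid).T @ data / n               # pseudolikelihood gradient wrt W
        G = (G + G.T) / 2; np.fill_diagonal(G, 0)  # keep W symmetric, no self-coupling
        W += lr * G
    print(np.round(W, 2))                          # W[0, 1] picks up the planted coupling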

Structured Prediction in NLP – A survey [article]

Chauhan Dev, Naman Biyani, Nirmal P. Suthar, Prashant Kumar, Priyanshu Agarwal
2021 arXiv   pre-print
some detailed ideas for future research in these fields.  ...  Over the last several years, the field of structured prediction in NLP has seen huge advancements with sophisticated probabilistic graphical models, energy-based networks, and its combination with  ...  They explain how they can implement a linear-chain conditional random field and a graph-based parsing model as classes of structured attention networks, which are equivalent to neural network layers.  ...
arXiv:2110.02057v1 fatcat:ti3isjlburcqtc7nisoel54kbe

Structured Prediction via the Extragradient Method

Benjamin Taskar, Simon Lacoste-Julien, Michael I. Jordan
2005 Neural Information Processing Systems  
We present a simple and scalable algorithm for large-margin estimation of structured models, including an important class of Markov networks and combinatorial models.  ...  We formulate the estimation problem as a convex-concave saddle-point problem and apply the extragradient method, yielding an algorithm with linear convergence using simple gradient and projection calculations  ...  Acknowledgments We thank Paul Tseng for kindly answering our questions about his min-cost flow code.  ... 
dblp:conf/nips/TaskarLJ05 fatcat:hlvszmtsrfeg5pmwlhqk5wvr24
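
A generic sketch of the extragradient iteration, shown here on a tiny convex-concave saddle-point problem, min_x max_y x^T A y over probability simplices (a matrix game), rather than on the structured-estimation problems of the paper. The predict-then-correct steps and the cheap projections are the ingredients the paper exploits at scale.

    import numpy as np

    def project_simplex(v):
        # Euclidean projection onto the probability simplex
        u = np.sort(v)[::-1]
        css = np.cumsum(u)
        rho = np.nonzero(u + (1 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
        return np.maximum(v + (1 - css[rho]) / (rho + 1), 0)

    A = np.array([[0., 1., -1.], [-1., 0., 1.], [1., -1., 0.]])   # rock-paper-scissors
    x = np.array([0.8, 0.1, 0.1])
    y = np.array([0.1, 0.2, 0.7])
    eta = 0.2
    for _ in range(500):
        xh = project_simplex(x - eta * (A @ y))        # predictor (half) step
        yh = project_simplex(y + eta * (A.T @ x))
        x = project_simplex(x - eta * (A @ yh))        # corrector step at the midpoint
        y = project_simplex(y + eta * (A.T @ xh))
    print(np.round(x, 3), np.round(y, 3), round(float(x @ A @ y), 3))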
Showing results 1 — 15 out of 5,787 results