141 Hits in 3.4 sec

Convex Relaxations for Learning Bounded Treewidth Decomposable Graphs [article]

K. S. Sesh Kumar, Francis Bach
2012 arXiv   pre-print
We consider the problem of learning the structure of undirected graphical models with bounded treewidth, within the maximum likelihood framework. ... A supergradient method is used to solve the dual problem, with a run-time complexity of O(k^3 n^{k+2} log n) per iteration, where n is the number of variables and k is a bound on the treewidth. ... We would also like to thank other members of the SIERRA and WILLOW project-teams for helpful discussions. ...
arXiv:1212.2573v1
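
For the special case of treewidth bound k = 1, the maximum-likelihood structure is the classical Chow-Liu tree: a maximum-weight spanning tree under empirical pairwise mutual information. A minimal sketch of that k = 1 case only, on made-up binary data; the paper's convex relaxation for general k is not implemented here.

```python
# Sketch of the k = 1 (Chow-Liu) special case of maximum-likelihood
# structure learning: weight each pair by empirical mutual information,
# then take a maximum-weight spanning tree.
import numpy as np
import networkx as nx
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 5))        # toy binary data, 5 variables

G = nx.Graph()
n = X.shape[1]
for i in range(n):
    for j in range(i + 1, n):
        G.add_edge(i, j, weight=mutual_info_score(X[:, i], X[:, j]))

tree = nx.maximum_spanning_tree(G)           # the Chow-Liu tree
print(sorted(tree.edges()))
```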

Maximizing submodular functions using probabilistic graphical models [article]

K. S. Sesh Kumar, Francis Bach
2013 arXiv   pre-print
These upper bounds may then be jointly maximized with respect to a set, while minimized with respect to the graph, leading to a convex variational inference scheme for maximizing submodular functions, ... By considering graphs of increasing treewidth, we may then explore the trade-off between computational complexity and tightness of the relaxation. ... We would also like to thank Nino Shervashidze for detailed feedback on the draft. ...
arXiv:1309.2593v1
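
For context, the textbook baseline for monotone submodular maximization under a cardinality constraint is the greedy algorithm with its (1 - 1/e) guarantee (Nemhauser, Wolsey, Fisher). The paper targets the harder variational setting, so the sketch below, with a toy coverage function, is only the standard point of comparison, not the paper's scheme.

```python
# Greedy baseline for monotone submodular maximization under a
# cardinality constraint: repeatedly add the element of largest
# marginal gain.
def greedy_submodular(f, ground_set, k):
    """Pick k elements, each maximizing the marginal gain of f."""
    S = set()
    for _ in range(k):
        best = max(ground_set - S, key=lambda e: f(S | {e}) - f(S))
        S.add(best)
    return S

# Toy coverage function: f(S) = size of the union of the chosen sets.
sets = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5, 6}, 3: {1, 6}}
f = lambda S: len(set().union(*(sets[i] for i in S)))
print(greedy_submodular(f, set(sets), 2))    # {0, 2}, covering all 6 items
```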

An Upper Bound on the Global Optimum in Parameter Estimation

Khaled S. Refaat, Adnan Darwiche
2015 Conference on Uncertainty in Artificial Intelligence  
Learning graphical model parameters from incomplete data is a non-convex optimization problem. ... We exploit variables that are always observed in the dataset to obtain an upper bound on the global optimum, which can give insight into the quality of the parameters learned by estimation algorithms. ... By relaxing these equality constraints, we obtain a convex optimization problem that provides an upper bound on the original optimization problem. ...
dblp:conf/uai/RefaatD15
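
The bounding mechanism in the last fragment is the generic relaxation inequality: relaxing constraints enlarges the feasible set, so the optimum of a maximization problem can only increase,

```latex
F \subseteq F' \quad\Longrightarrow\quad \max_{x \in F} f(x) \;\le\; \max_{x \in F'} f(x),
```

and when the relaxed problem is convex, this upper bound is efficiently computable.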

CRVI: Convex Relaxation for Variational Inference

Ghazal Fazelnia, John W. Paisley
2018 International Conference on Machine Learning  
For most models, this optimization is highly non-convex and thus hard to solve. ... Our theoretical results guarantee very tight relaxation bounds that get closer to the globally optimal solution than traditional coordinate ascent. ... In fact, it is upper bounded by a property of a graph structure defined for the original problem, namely its treewidth. ...
dblp:conf/icml/FazelniaP18

Virtual Network Embedding via Decomposable LP Formulations: Orientations of Small Extraction Width and Beyond [article]

Elias Döhne
2018 arXiv   pre-print
It therefore combines positive traits of heuristics and exact approaches: the runtime is polynomial for instances with bounded EW, and the algorithm returns approximate solutions with high probability. ... This algorithm is based on an LP formulation and is FPT in the newly introduced graph parameter extraction width (EW). ... stating that such problems can be decided in linear time for graphs with bounded treewidth. ...
arXiv:1810.11280v1

New Limits of Treewidth-based tractability in Optimization [article]

Yuri Faenza, Gonzalo Muñoz, Sebastian Pokutta
2019 arXiv   pre-print
This parameter has been used for decades for analyzing the complexity of various optimization problems and for obtaining tractable algorithms for problems where this parameter is bounded. ... An example of this type of structure is given by treewidth: a graph-theoretical parameter that measures how "tree-like" a graph is. ... Acknowledgements: Research reported in this paper was partially supported by NSF CAREER award CMMI-1452463 and by the Institute for Data Valorisation (IVADO). ...
arXiv:1807.02551v4
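
To get a feel for treewidth in practice, networkx ships heuristics that upper-bound it; a minimal sketch (the Petersen graph is just a stand-in for a problem's graph):

```python
# Heuristic upper bound on treewidth via networkx. treewidth_min_degree
# returns (width, tree_decomposition); the heuristic can overestimate,
# so treat the result as an upper bound only.
import networkx as nx
from networkx.algorithms import approximation as approx

G = nx.petersen_graph()                      # stand-in for a problem's graph
width, decomposition = approx.treewidth_min_degree(G)
print("treewidth upper bound:", width)       # exact treewidth of Petersen: 4
```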

Large Margin Boltzmann Machines and Large Margin Sigmoid Belief Networks [article]

Xu Miao, Rajesh P.N. Rao
2010 arXiv   pre-print
This probability is data-distribution dependent and is maximized in learning. ... LMSBNs in particular allow a very fast inference algorithm for arbitrary graph structures that runs in polynomial time with high probability. ... We implement a convex relaxation-based linear programming algorithm for inference, since in both (Finley and Joachims, 2008) and (Kulesza and Pereira, 2007), the convex relaxation-based approximate ...
arXiv:1003.4781v1

Conditions beyond treewidth for tightness of higher-order LP relaxations

Mark Rowland, Aldo Pacchiano, Adrian Weller
2017 International Conference on Artificial Intelligence and Statistics  
Our results include showing that for higher-order LP relaxations, treewidth is not precisely the right way to characterize tightness. ... We consider binary pairwise models and introduce new methods that allow us to demonstrate refined conditions for tightness of LP relaxations in the Sherali-Adams hierarchy. ... MR acknowledges support by the UK Engineering and Physical Sciences Research Council (EPSRC) grant EP/L016516/1 for the University of Cambridge Centre for Doctoral Training, the Cambridge Centre for Analysis ...
dblp:conf/aistats/RowlandPW17
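
A self-contained illustration of the tightness question studied here: the sketch below builds the standard pairwise ("local polytope") LP relaxation of a binary model, with potentials made up to form a frustrated 3-cycle, where the LP optimum (3) strictly exceeds the best integer value (2). On three variables, adding level-3 Sherali-Adams constraints recovers the exact marginal polytope and closes this gap.

```python
# Pairwise (local polytope) LP relaxation for MAP inference, evaluated on
# a frustrated 3-cycle where the relaxation is loose. The potentials and
# variable layout are illustrative choices, not from the paper.
import numpy as np
from scipy.optimize import linprog

nodes = [0, 1, 2]
edges = [(0, 1), (1, 2), (0, 2)]

# Variable layout: mu_i(x) per node/state, then mu_ij(x, y) per edge.
n_node_vars = 2 * len(nodes)                 # mu_i(0), mu_i(1)
n_vars = n_node_vars + 4 * len(edges)        # plus mu_ij(0,0),(0,1),(1,0),(1,1)

def node_var(i, x):
    return 2 * i + x

def edge_var(e, x, y):
    return n_node_vars + 4 * e + 2 * x + y

# Objective: maximize disagreement on every edge (a frustrated cycle).
c = np.zeros(n_vars)
for e, _ in enumerate(edges):
    c[edge_var(e, 0, 1)] = -1.0              # linprog minimizes, so negate
    c[edge_var(e, 1, 0)] = -1.0

A_eq, b_eq = [], []
for i in nodes:                              # normalization: sum_x mu_i(x) = 1
    row = np.zeros(n_vars)
    row[node_var(i, 0)] = row[node_var(i, 1)] = 1.0
    A_eq.append(row); b_eq.append(1.0)
for e, (i, j) in enumerate(edges):           # marginalization constraints
    for x in (0, 1):                         # sum_y mu_ij(x, y) = mu_i(x)
        row = np.zeros(n_vars)
        row[edge_var(e, x, 0)] = row[edge_var(e, x, 1)] = 1.0
        row[node_var(i, x)] = -1.0
        A_eq.append(row); b_eq.append(0.0)
    for y in (0, 1):                         # sum_x mu_ij(x, y) = mu_j(y)
        row = np.zeros(n_vars)
        row[edge_var(e, 0, y)] = row[edge_var(e, 1, y)] = 1.0
        row[node_var(j, y)] = -1.0
        A_eq.append(row); b_eq.append(0.0)

res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=[(0, 1)] * n_vars, method="highs")
print("LP optimum:", -res.fun)               # 3.0, but the best integer value is 2
```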

Marginal likelihoods for distributed estimation of graphical model parameters

Zhaoshi Meng, Dennis Wei, Alfred O. Hero, Ami Wiesel
2013 5th IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)  
Each node independently estimates local parameters by solving a low-dimensional convex optimization with data collected from its local neighborhood. ... We then describe an alternative framework for distributed parameter estimation based on maximizing marginal likelihoods. ... The bound for the two-hop relaxed MML estimator closely approximates the bound for the GML estimator, which indicates its statistical efficiency. ...
doi:10.1109/camsap.2013.6714010 dblp:conf/camsap/MengWHW13
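
A loose sketch of the "each node solves a small local problem" pattern, here via plain neighborhood least squares on synthetic Gaussian data. This is closer in spirit to Meinshausen-Bühlmann neighborhood regression than to the paper's relaxed marginal-likelihood estimator, and the graph and data below are invented.

```python
# Each node fits its own parameters from its neighborhood only: a crude
# stand-in for distributed local estimation in a Gaussian graphical model.
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 4))           # 4 variables on a chain 0-1-2-3
X[:, 1] += 0.8 * X[:, 0]                     # inject dependence of node 1 on 0
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

local_params = {}
for i, nbrs in neighbors.items():            # each node solves its own problem
    A, y = X[:, nbrs], X[:, i]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    local_params[i] = dict(zip(nbrs, coef.round(2)))
print(local_params)
```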

Principled Deep Neural Network Training through Linear Programming [article]

Daniel Bienstock, Gonzalo Muñoz, Sebastian Pokutta
2022 arXiv   pre-print
Deep learning has received much attention lately due to the impressive empirical performance achieved by training algorithms. ... Consequently, a need for a better theoretical understanding of these problems has become more evident in recent years. ... linear regions of a Deep Neural Network [31], performing inference [2], and providing strong convex relaxations for trained neural networks [3]. ...
arXiv:1810.03218v3

Approximation Bounds for Inference using Cooperative Cuts

Stefanie Jegelka, Jeff A. Bilmes
2011 International Conference on Machine Learning  
This family includes models having arbitrary treewidth and arbitrarily sized factors. ... We thank Jens Vygen for the example of a very hard subadditive function. ... For the relaxation, we need to extend f from a set function to a function on a continuous domain. ...
dblp:conf/icml/JegelkaB11a
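
The last fragment concerns extending a set function to a continuous domain; the standard device for this is the Lovász extension, which is convex exactly when the set function is submodular. Whether it coincides with the paper's construction is not claimed here; this is the generic recipe.

```python
# The Lovasz extension: interpolate f between the 0/1 vertices of the cube
# by sorting coordinates and charging marginal gains. It agrees with f on
# indicator vectors and is convex iff f is submodular.
import numpy as np

def lovasz_extension(f, w):
    """f maps a frozenset of indices to a float; w is a point in [0, 1]^n."""
    order = np.argsort(-np.asarray(w, dtype=float))  # coordinates, descending
    chain, prev, val = [], f(frozenset()), 0.0
    for j in order:
        chain.append(int(j))
        cur = f(frozenset(chain))
        val += w[j] * (cur - prev)                   # charge the marginal gain
        prev = cur
    return val

f = lambda S: len(S) ** 0.5                          # a simple submodular function
print(lovasz_extension(f, [1.0, 0.0, 1.0]))          # = f({0, 2}) = sqrt(2)
```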

The Linear Programming Polytope of Binary Constraint Problems with Bounded Tree-Width [chapter]

Meinolf Sellmann, Luc Mercier, Daniel H. Leventhal
2007 Lecture Notes in Computer Science  
when the BCP that is given has bounded tree-width. ... However, the way we post the constraints is quite different, so that it suffices to add variables only for subsets of size equal to k. ... Another example is search algorithms that exploit problem decomposability, such as polynomial-time algorithms for problems on graphs with bounded tree-width [15]. ...
doi:10.1007/978-3-540-72397-4_20
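
The "variables for subsets of size equal to k" device is reminiscent of level-k Sherali-Adams-style consistency. A generic formulation (an assumption, not necessarily the authors' exact constraints) attaches a local distribution \mu_S to each small subset S and ties overlapping subsets together:

```latex
% Generic level-k consistency constraints; the paper's formulation may differ.
\mu_S(x_S) \ge 0, \qquad \sum_{x_S} \mu_S(x_S) = 1 \quad \text{for all } |S| \le k,
\qquad \sum_{x_{S \setminus T}} \mu_S(x_S) = \mu_T(x_T) \quad \text{for all } T \subseteq S.
```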

Merging Strategies for Sum-Product Networks: From Trees to Graphs

Tahrima Rahman, Vibhav Gogate
2016 Conference on Uncertainty in Artificial Intelligence  
cost of a slight increase in the learning time. ... Although algorithms for inducing their structure from data have come quite far and often outperform algorithms that induce probabilistic graphical models, a key issue with existing approaches is that ...
dblp:conf/uai/RahmanG16

Tractability in constraint satisfaction problems: a survey

Clément Carbonnel, Martin C. Cooper
2015 Constraints  
Acknowledgments: We are grateful to Peter Jeavons and Stanislav Živný for their detailed comments on a first draft of this paper, and to the reviewers for their constructive comments. ... It is well known that the class of instances whose constraint graph has bounded treewidth is tractable [85]. ... (Graphs with treewidth bounded by k are also known as partial k-trees, because they are exactly the subgraphs of k-trees [4].) ...
doi:10.1007/s10601-015-9198-6
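
To make the tractability claim concrete: on a tree (treewidth 1), counting CSP solutions is linear-time dynamic programming from the leaves up. The toy below counts proper 3-colorings of a small made-up tree, treating each edge as a binary not-equal constraint; the same message-passing pattern runs over a tree decomposition for any bounded-treewidth instance.

```python
# Leaf-to-root DP for a tree-structured CSP: count the proper 3-colorings
# of a rooted tree (every edge is a "not-equal" constraint).
T = {0: [1, 2], 1: [3, 4], 2: [], 3: [], 4: []}  # node -> children

def count(v):
    ways = [1, 1, 1]                  # ways[c] = #colorings of v's subtree with v -> c
    for child in T[v]:
        cw = count(child)
        for c in range(3):            # the child may take any color other than c
            ways[c] *= sum(cw) - cw[c]
    return ways

print(sum(count(0)))                  # 3 * 2**4 = 48 for this 5-node tree
```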

Sidestepping Intractable Inference with Structured Ensemble Cascades

David J. Weiss, Benjamin Sapp, Ben Taskar
2010 Neural Information Processing Systems  
Our framework jointly estimates parameters for all models in the ensemble for each level of the cascade by minimizing a novel, convex loss function, yet requires only a linear increase in computation over learning or inference in a single tractable sub-model. ... for low-treewidth models [4]. ...
dblp:conf/nips/WeissST10
Showing results 1 — 15 out of 141