
Active Learning of Equivalence Relations by Minimizing the Expected Loss Using Constraint Inference

Steffen Rendle, Lars Schmidt-Thieme
2008 2008 Eighth IEEE International Conference on Data Mining  
This technique makes use of inference of expected constraints.  ...  To select queries that yield a large number of meaningful constraints, we present an approximately optimal selection technique that greedily minimizes the expected loss in each round of active  ...  Acknowledgements This work was funded by the X-Media project (www.xmedia-project.org) sponsored by the European Commission as part of the Information Society Technologies (IST) programme under EC grant  ... 
doi:10.1109/icdm.2008.41 dblp:conf/icdm/RendleS08 fatcat:foqa4xzs5fhaxkshrgxtsx5tna
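The greedy expected-loss query selection this abstract describes can be sketched roughly as follows. This is a minimal sketch under stated assumptions: the belief and loss functions here are hypothetical stand-ins, not the paper's constraint-inference-based estimates.

```python
def expected_loss(query, p_same, loss_if_same, loss_if_diff):
    """Expected loss of posing `query` (a candidate record pair): average the
    loss after each possible answer, weighted by the model's current belief
    that the pair is equivalent."""
    p = p_same(query)
    return p * loss_if_same(query) + (1 - p) * loss_if_diff(query)

def select_query(candidates, p_same, loss_if_same, loss_if_diff):
    """Greedily pick the query with minimal expected loss for this round."""
    return min(candidates,
               key=lambda q: expected_loss(q, p_same, loss_if_same, loss_if_diff))

# Toy illustration: three candidate pairs with hypothetical belief/loss tables.
beliefs = {0: 0.9, 1: 0.5, 2: 0.1}
losses_same = {0: 1.0, 1: 2.0, 2: 3.0}
losses_diff = {0: 5.0, 1: 2.0, 2: 1.0}
best = select_query([0, 1, 2],
                    p_same=beliefs.get,
                    loss_if_same=losses_same.get,
                    loss_if_diff=losses_diff.get)
```

The uncertain pair (index 1) is not necessarily chosen: the selection weighs how costly each possible answer would be, not just how unsure the model is.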

Fused regression for multi-source gene regulatory network inference [article]

Kari Y. Lam, Zachary M. Westrick, Christian Müller, Lionel Christiaen, Richard Bonneau
2016 biorxiv/medrxiv   pre-print
Most approaches consider the problem of network inference independently in each species, despite evidence that gene regulation can be conserved even in distantly related species.  ...  We refine this method by presenting an algorithm that extracts the true conserved subnetwork from a larger set of potentially conserved interactions and demonstrate the utility of our method in cross-species  ...  RB was supported by the Simons Foundation and US National Science Foundation grants IOS-1126971, CBET-1067596 and CHE-1151554, and National  ...  Analyzed the data: KYL ZMW RB.  ... 
doi:10.1101/049775 fatcat:n375dzpbkvcojomf2fmr57mqci

Hybrid SRL with Optimization Modulo Theories [article]

Stefano Teso and Roberto Sebastiani and Andrea Passerini
2014 arXiv   pre-print
From a statistical-relational learning (SRL) viewpoint, the task can be interpreted as a constraint satisfaction problem, i.e. the generated objects must obey a set of soft constraints, whose weights are  ...  We also present a few examples of constructive learning applications enabled by our method.  ...  Introduction Traditional statistical-relational learning (SRL) methods allow one to reason and make inferences about relational objects characterized by a set of soft constraints [1].  ... 
arXiv:1402.4354v1 fatcat:eucavgxvibeyvc6pt27hvgpqri

Fused Regression for Multi-source Gene Regulatory Network Inference

Kari Y. Lam, Zachary M. Westrick, Christian L. Müller, Lionel Christiaen, Richard Bonneau, Florian Markowetz
2016 PLoS Computational Biology  
We then introduce an extension of the method to deal with the condition of uncertainty over the degree of regulatory conservation by simultaneously inferring gene conservation  ...  The presence of shared structure in a well-studied model system or process should make the problem of network inference in a related process easier, but this information is not often applied to the discovery  ...  RB was supported by the Simons Foundation and US National Science Foundation grants IOS-1126971, CBET-1067596 and CHE-1151554, and National  ...  Analyzed the data: KYL ZMW RB.  ... 
doi:10.1371/journal.pcbi.1005157 pmid:27923054 pmcid:PMC5140053 fatcat:7v2hpzsv2rejpbva34e72my7ga

Hinge-loss Markov Random Fields: Convex Inference for Structured Prediction [article]

Stephen Bach, Bert Huang, Ben London, Lise Getoor
2013 arXiv   pre-print
We introduce the first inference algorithm that is both scalable and applicable to the full class of HL-MRFs, and show how to train HL-MRFs with several learning algorithms.  ...  Instead of working in a combinatorial space, we use hinge-loss Markov random fields (HL-MRFs), an expressive class of graphical models with log-concave density functions over continuous variables, which  ... 
arXiv:1309.6813v1 fatcat:7qs5govmtfcaxnjmtmzejtn5ju

Learning Weighted Lower Linear Envelope Potentials in Binary Markov Random Fields

Stephen Gould
2015 IEEE Transactions on Pattern Analysis and Machine Intelligence  
Then, with tractable inference in hand, we show how the parameters of the lower linear envelope potentials can be estimated from labeled training data within a max-margin learning framework.  ...  In computer vision an important class of constraints encode a preference for label consistency over large sets of pixels and can be modeled using higher-order terms known as lower linear envelope potentials  ...  Lemma 3.4: Unconstrained (binary) minimization of the function E_c(y_c, z) over z is equivalent to minimization of E_c(y_c, z) subject to the constraints z_{k+1} ≤ z_k.  ... 
doi:10.1109/tpami.2014.2366760 pmid:26352443 fatcat:4pj5piqv6bhgrlnkd2ntpozqie
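A lower linear envelope potential, as the snippet describes, is a concave piecewise-linear function of a (weighted) label count within a clique. A minimal sketch, with hypothetical coefficients:

```python
def lower_linear_envelope(W, lines):
    """Lower linear envelope potential: the pointwise minimum of linear
    functions a_k * W + b_k of the (weighted) label count W in a clique.
    The concave shape rewards label consistency over large pixel sets.
    `lines` holds illustrative (a_k, b_k) coefficients, not values from
    the paper."""
    return min(a * W + b for a, b in lines)

# Two illustrative lines: the penalty grows with the count, then saturates.
envelope = [(1.0, 0.0), (0.0, 2.0)]
```

Saturation is the key property: once enough pixels in the clique disagree, additional disagreement costs nothing extra, so flipping the whole clique to a consistent labeling stays attractive.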

Information Dropout: Learning Optimal Representations Through Noisy Computation

Alessandro Achille, Stefano Soatto
2018 IEEE Transactions on Pattern Analysis and Machine Intelligence  
The cross-entropy loss commonly used in deep learning is closely related to the defining properties of optimal representations, but does not enforce some of the key properties.  ...  We show that this can be solved by adding a regularization term, which is in turn related to injecting multiplicative noise in the activations of a Deep Neural Network, a special case of which is the common  ...  ACKNOWLEDGMENTS Work supported by ARO, ONR, AFOSR. We are very grateful to the reviewers for their thorough analysis of the paper.  ... 
doi:10.1109/tpami.2017.2784440 pmid:29994167 fatcat:ejcrnroedvhtxjl4vb7vj3vwgu
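The multiplicative-noise idea in this abstract can be sketched as follows. This is a toy sketch, not the paper's method: `alpha` is a fixed hypothetical noise scale, whereas Information Dropout learns it per unit from data.

```python
import numpy as np

rng = np.random.default_rng(0)

def information_dropout(activations, alpha=0.3, train=True):
    """Scale each activation by eps ~ LogNormal(0, alpha^2) instead of
    zeroing units as standard dropout does. At test time the layer is
    the identity (the noise has median 1, so no rescaling is applied
    in this sketch)."""
    if not train:
        return activations
    eps = rng.lognormal(mean=0.0, sigma=alpha, size=activations.shape)
    return activations * eps
```

Because the noise factor is always positive, the sign of every activation is preserved, unlike the hard zeroing of standard dropout.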

Exploring Compositional High Order Pattern Potentials for Structured Output Learning

Yujia Li, Daniel Tarlow, Richard Zemel
2013 2013 IEEE Conference on Computer Vision and Pattern Recognition  
in conjunction with other model potentials to minimize expected loss; and (b) learning an image-dependent mapping that encourages or inhibits patterns depending on image features.  ...  We show that CHOPPs include the linear deviation pattern potentials of Rother et al. [26] and also Restricted Boltzmann Machines (RBMs); we also establish the near equivalence of these two models.  ...  Instead, we train the model to minimize expected loss which we believe allows the model to more globally learn the distribution.  ... 
doi:10.1109/cvpr.2013.14 dblp:conf/cvpr/LiTZ13 fatcat:nixxk5z72zcr5feinccsjfl2ha

Margin-Based Active Learning for Structured Output Spaces [chapter]

Dan Roth, Kevin Small
2006 Lecture Notes in Computer Science  
Typically, these structured output scenarios are also characterized by a high cost associated with obtaining supervised training data, motivating the study of active learning for these situations.  ...  In many complex machine learning applications there is a need to learn multiple interdependent output variables, where knowledge of these interdependencies can be exploited to improve the global performance  ...  Acknowledgments The authors would like to thank Ming-Wei Chang, Vasin Punyakanok, Alex Klementiev, Nick Rizzolo, and the reviewers for helpful comments and/or dis-  ... 
doi:10.1007/11871842_40 fatcat:gpb3d3gy3zbtddebonuor3vupu

Machine Teaching for Inverse Reinforcement Learning: Algorithms and Applications

Daniel S. Brown, Scott Niekum
2019 Proceedings of the AAAI Conference on Artificial Intelligence  
equivalence class of the demonstrator.  ...  We apply our proposed machine teaching algorithm to two novel applications: providing a lower bound on the number of queries needed to learn a policy using active IRL and developing a novel IRL algorithm  ...  Acknowledgments This work has taken place in the Personal Autonomous Robotics Lab (PeARL) at The University of Texas at Austin.  ... 
doi:10.1609/aaai.v33i01.33017749 fatcat:ciylhomm3vcf5pqedaywgeq7li

Active Learning for Probabilistic Structured Prediction of Cuts and Matchings

Sima Behpour, Anqi Liu, Brian D. Ziebart
2019 International Conference on Machine Learning  
However, computational time complexity limits prevalent probabilistic methods from effectively supporting active learning.  ...  We propose an adversarial approach to active learning in structured prediction domains that is tractable for cuts and matchings.  ...  Acknowledgements This work was supported, in part, by the National Science Foundation under Grant No. 1652530.  ... 
dblp:conf/icml/BehpourLZ19 fatcat:3kienvxtwnfflgmlpa7mtuz744

Learning for Structured Prediction Using Approximate Subgradient Descent with Working Sets

Aurelien Lucchi, Yunpeng Li, Pascal Fua
2013 2013 IEEE Conference on Computer Vision and Pattern Recognition  
We propose a working-set-based approximate subgradient descent algorithm to minimize the margin-sensitive hinge loss arising from the soft constraints in max-margin learning frameworks, such as the structured  ...  be used to reduce learning time at only a small cost in performance.  ...  Related work Maximum-margin learning of CRFs was first formulated in the max-margin Markov networks (M^3N) [26], whose objective is to minimize a margin-sensitive hinge loss between the ground-truth  ... 
doi:10.1109/cvpr.2013.259 dblp:conf/cvpr/LucchiLF13 fatcat:k72mmfmet5edll4kgqv5axcyc4
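The subgradient step on a margin-sensitive hinge loss can be sketched in a toy linear, non-structured analogue of the max-margin objective the abstract describes; here the set of margin-violating examples plays the role of a working set of active constraints. All data and constants are illustrative.

```python
import numpy as np

def hinge_subgradient_step(w, X, y, lr=0.1, reg=1e-2):
    """One subgradient step on
    reg/2 * ||w||^2 + (1/n) * sum_i max(0, 1 - y_i * <w, x_i>)."""
    n = len(y)
    violated = y * (X @ w) < 1.0          # margin-violating "working set"
    grad = reg * w
    if violated.any():
        grad = grad - (y[violated, None] * X[violated]).sum(axis=0) / n
    return w - lr * grad

def hinge_loss(w, X, y, reg=1e-2):
    return reg / 2 * w @ w + np.maximum(0.0, 1.0 - y * (X @ w)).mean()

# Tiny linearly separable toy problem.
X = np.array([[1.0, 1.0], [2.0, 1.0], [-1.0, -1.0], [-2.0, -0.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w = np.zeros(2)
for _ in range(200):
    w = hinge_subgradient_step(w, X, y)
```

Only examples inside the margin contribute to the subgradient, which is what makes a working-set restriction natural: examples far outside the margin can be skipped without changing the update.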

Hinge-Loss Markov Random Fields and Probabilistic Soft Logic [article]

Stephen H. Bach, Matthias Broecheler, Bert Huang, Lise Getoor
2017 arXiv   pre-print
The first, hinge-loss Markov random fields (HL-MRFs), is a new kind of probabilistic graphical model that generalizes different approaches to convex inference.  ...  We then show how to learn the parameters of HL-MRFs. The learned HL-MRFs are as accurate as analogous discrete models, but much more scalable.  ...  Acknowledgments We acknowledge the many people who have contributed to the development of HL-MRFs and PSL.  ... 
arXiv:1505.04406v3 fatcat:msjfalt6nrfxfo37fe5yrc536y
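Convex MAP inference in a hinge-loss MRF can be sketched with projected subgradient descent on the energy. This plain solver is a toy stand-in for the scalable consensus (ADMM-style) algorithm the paper develops, and the two potentials below are hypothetical, not drawn from any PSL program.

```python
import numpy as np

def map_inference(potentials, n, steps=500, lr=0.05):
    """Minimize sum_j w_j * max(0, c_j . y + b_j)**p_j over y in [0, 1]^n
    by projected (sub)gradient descent. Each potential is (w, c, b, p)
    with p in {1, 2}, so the energy is convex in y."""
    y = np.full(n, 0.5)
    for _ in range(steps):
        g = np.zeros(n)
        for w, c, b, p in potentials:
            slack = c @ y + b
            if slack > 0:
                g += w * p * slack ** (p - 1) * c
        y = np.clip(y - lr * g, 0.0, 1.0)
    return y

# Two hypothetical potentials pulling one variable into the interval [0.8, 0.9]:
pots = [
    (1.0, np.array([-1.0]), 0.8, 1),   # max(0, 0.8 - y): penalize y below 0.8
    (1.0, np.array([1.0]), -0.9, 1),   # max(0, y - 0.9): penalize y above 0.9
]
y_map = map_inference(pots, n=1)
```

Because the variables are continuous in [0, 1] and every potential is a convex hinge, the descent reaches a global minimum; no combinatorial search over discrete labelings is needed.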

Machine Teaching for Inverse Reinforcement Learning: Algorithms and Applications [article]

Daniel S. Brown, Scott Niekum
2019 arXiv   pre-print
equivalence class of the demonstrator.  ...  We apply our proposed machine teaching algorithm to two novel applications: providing a lower bound on the number of queries needed to learn a policy using active IRL and developing a novel IRL algorithm  ...  We measured the 0-1 policy loss (Michini et al. 2015) for each demonstration set by computing the percentage of states where the resulting policy took a suboptimal action under the true reward.  ... 
arXiv:1805.07687v7 fatcat:a3j5kt5e7ndmxcglkosl4wowmi
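The 0-1 policy loss mentioned in the snippet can be computed directly from its description. The state/action encoding below is a hypothetical toy MDP, chosen only to make the metric concrete.

```python
def policy_01_loss(policy, optimal_actions):
    """Fraction of states where the learned policy's action is suboptimal
    under the true reward. `policy` maps state index -> chosen action;
    `optimal_actions` maps state index -> the set of optimal actions there."""
    n = len(optimal_actions)
    return sum(policy[s] not in optimal_actions[s] for s in range(n)) / n
```

Note that states with several optimal actions incur no loss as long as the policy picks any one of them.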

Information Dropout: Learning Optimal Representations Through Noisy Computation [article]

Alessandro Achille, Stefano Soatto
2017 arXiv   pre-print
The cross-entropy loss commonly used in deep learning is closely related to the defining properties of optimal representations, but does not enforce some of the key properties.  ...  We show that this can be solved by adding a regularization term, which is in turn related to injecting multiplicative noise in the activations of a Deep Neural Network, a special case of which is the common  ...  Acknowledgments Work supported by ARO, ONR, AFOSR.  ... 
arXiv:1611.01353v3 fatcat:zkysgik6uza5dil2t3vi2s7l4m