Monte Carlo Syntax Marginals for Exploring and Using Dependency Parses [article]

Katherine A. Keith, Su Lin Blodgett, Brendan O'Connor
2018 arXiv   pre-print
Second, we demonstrate the usefulness of our Monte Carlo syntax marginal method for parser error analysis and calibration.  ...  However, ambiguity is inherent to natural language syntax, and communicating such ambiguity is important for error analysis and better-informed downstream applications.  ...  Emma Strubell, and the anonymous reviewers for their helpful comments.  ... 
arXiv:1804.06004v1 fatcat:r3nhbyvohjcfzplllrghibzibm

Monte Carlo Syntax Marginals for Exploring and Using Dependency Parses

Katherine Keith, Su Lin Blodgett, Brendan O'Connor
2018 Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)  
Second, we demonstrate the usefulness of our Monte Carlo syntax marginal method for parser error analysis and calibration.  ...  However, ambiguity is inherent to natural language syntax, and communicating such ambiguity is important for error analysis and better-informed downstream applications.  ...  Acknowledgments The authors would like to thank Rajarshi Das, Daniel Cohen, Abe Handler, Graham Neubig, Emma Strubell, and the anonymous reviewers for their helpful comments.  ... 
doi:10.18653/v1/n18-1084 dblp:conf/naacl/KeithBO18 fatcat:3sptkvng5ravfeuebzhu5avu2m
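
A minimal sketch of the Monte Carlo syntax-marginal idea this paper describes: sample whole dependency trees from a parser's distribution and use the relative frequency of each arc across samples as its approximate marginal probability. The `sample_tree` callable below is a hypothetical stand-in for any parser that can sample from p(tree | sentence); it is not the authors' code.

    from collections import Counter

    def edge_marginals(sentence, sample_tree, num_samples=1000):
        """Approximate P(arc | sentence) by Monte Carlo over sampled parses."""
        counts = Counter()
        for _ in range(num_samples):
            tree = sample_tree(sentence)  # one parse: an iterable of (head, child) arcs
            for arc in tree:
                counts[arc] += 1
        # relative frequency across samples approximates the arc's marginal
        return {arc: c / num_samples for arc, c in counts.items()}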

BCM: toolkit for Bayesian analysis of Computational Models using samplers

Bram Thijssen, Tjeerd M. H. Dijkstra, Tom Heskes, Lodewyk F. A. Wessels
2016 BMC Systems Biology  
It provides efficient, multithreaded implementations of eleven algorithms for sampling from posterior probability distributions and for calculating marginal likelihoods.  ...  This uncertainty can be analyzed with Bayesian statistics, however, the sampling algorithms that are frequently used for calculating Bayesian statistical estimates are computationally demanding, and each  ...  BioBayes uses parallel-tempered Markov Chain Monte Carlo, ABC-SysBio uses sequential Monte Carlo sampling in combination with Approximate Bayesian Computation, SYSBIONS uses nested sampling, and Stan uses  ... 
doi:10.1186/s12918-016-0339-3 pmid:27769238 pmcid:PMC5073811 fatcat:kqla2ocfrnel5ntmzua7qqk24y
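
To make "calculating marginal likelihoods" concrete, here is the naive Monte Carlo baseline that toolkits like BCM improve on: draw parameters from the prior and average the data likelihood (in log space for stability). This illustrates the quantity being estimated, not one of BCM's eleven samplers, and the function names are invented.

    import math

    def naive_marginal_likelihood(log_likelihood, sample_prior, num_samples=10000):
        # log p(D) ~= logsumexp_i log p(D | theta_i) - log N, theta_i drawn from the prior
        logs = [log_likelihood(sample_prior()) for _ in range(num_samples)]
        m = max(logs)
        return m + math.log(sum(math.exp(l - m) for l in logs)) - math.log(num_samples)

This estimator is unbiased but high-variance when the posterior is much narrower than the prior, which is why methods like nested sampling and parallel-tempered MCMC are used instead.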

From Language to Programs: Bridging Reinforcement Learning and Maximum Marginal Likelihood

Kelvin Guu, Panupong Pasupat, Evan Liu, Percy Liang
2017 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)  
We apply our learning algorithm to a new neural semantic parser and show significant gains over existing state-of-the-art results on a recent context-dependent semantic parsing task.  ...  We connect two common learning paradigms, reinforcement learning (RL) and maximum marginal likelihood (MML), and then present a new learning algorithm that combines the strengths of both.  ...  The majority of these methods employ Monte Carlo sampling for exploration.  ... 
doi:10.18653/v1/p17-1097 dblp:conf/acl/GuuPLL17 fatcat:nrfhemntvfhoviqhbaoivlaoee
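
The RL/MML connection in the abstract can be summarized in a few lines. Viewing both objectives as weighted sums of grad log p(z_k) over a list of candidate programs with model probabilities p_k and binary rewards r_k, RL weights each program by r_k * p_k, while MML renormalizes those weights within the rewarded set. A toy sketch, not the authors' implementation:

    def gradient_weights(probs, rewards, mode="mml"):
        """Per-candidate weights on grad log p(z_k) under RL vs. MML objectives."""
        if mode == "rl":
            return [r * p for p, r in zip(probs, rewards)]
        denom = sum(r * p for p, r in zip(probs, rewards)) or 1.0  # guard: no rewarded program
        return [r * p / denom for p, r in zip(probs, rewards)]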

From Language to Programs: Bridging Reinforcement Learning and Maximum Marginal Likelihood [article]

Kelvin Guu, Panupong Pasupat, Evan Zheran Liu, Percy Liang
2017 arXiv   pre-print
We apply our learning algorithm to a new neural semantic parser and show significant gains over existing state-of-the-art results on a recent context-dependent semantic parsing task.  ...  We connect two common learning paradigms, reinforcement learning (RL) and maximum marginal likelihood (MML), and then present a new learning algorithm that combines the strengths of both.  ...  The majority of these methods employ Monte Carlo sampling for exploration.  ... 
arXiv:1704.07926v1 fatcat:kdcoude32bbf3kcrxemxykdruq

Latent Structure Models for Natural Language Processing

André F. T. Martins, Tsvetomila Mihaylova, Nikita Nangia, Vlad Niculae
2019 Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts  
e.g., syntax trees and dependency parsing. • machine learning: familiarity with neural networks for NLP, basic understanding of backpropagation and computation graphs.  ...  Estimated stochastic gradients are typically obtained with a combination of Monte Carlo sampling and the score function estimator (a.k.a. REINFORCE, Williams, 1992).  ... 
doi:10.18653/v1/p19-4001 dblp:conf/acl/MartinsMNN19 fatcat:5vdzt5p23ncqrcngougdsddcx4
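
The "score function estimator (a.k.a. REINFORCE)" mentioned in the snippet estimates the gradient of E_{z~p_theta}[f(z)] as E[f(z) * grad_theta log p_theta(z)], approximated by Monte Carlo sampling. A self-contained toy with a single Bernoulli latent variable parameterized by a logit; everything here is illustrative:

    import math, random

    def reinforce_grad(theta, f, num_samples=1000):
        """Score-function estimate of d/dtheta E_{z ~ Bernoulli(sigmoid(theta))}[f(z)]."""
        p = 1.0 / (1.0 + math.exp(-theta))  # Bernoulli success probability sigmoid(theta)
        total = 0.0
        for _ in range(num_samples):
            z = 1 if random.random() < p else 0
            total += f(z) * (z - p)         # (z - p) = d/dtheta log p_theta(z) for a Bernoulli logit
        return total / num_samples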

The Return of Lexical Dependencies: Neural Lexicalized PCFGs [article]

Hao Zhu, Yonatan Bisk, Graham Neubig
2020 arXiv   pre-print
However, in this work, we present novel neural models of lexicalized PCFGs which allow us to overcome sparsity problems and effectively induce both constituents and dependencies within a single model.  ...  In this paper we demonstrate that context free grammar (CFG) based methods for grammar induction benefit from modeling lexical dependencies.  ...  The authors would like to thank Junxian He and Yoon Kim for helpful feedback about the project.  ... 
arXiv:2007.15135v1 fatcat:4w5elylabjef5npcnt5dgg67vm

The Return of Lexical Dependencies: Neural Lexicalized PCFGs

Hao Zhu, Yonatan Bisk, Graham Neubig
2020 Transactions of the Association for Computational Linguistics  
However, in this work, we present novel neural models of lexicalized PCFGs that allow us to overcome sparsity problems and effectively induce both constituents and dependencies within a single model.  ...  In this paper we demonstrate that context free grammar (CFG) based methods for grammar induction benefit from modeling lexical dependencies.  ...  The authors would like to thank Junxian He and Yoon Kim for helpful feedback about the project.  ... 
doi:10.1162/tacl_a_00337 fatcat:otrejnbzgfgf5bhxhkqoycfy54

Unsupervised Recurrent Neural Network Grammars

Yoon Kim, Alexander Rush, Lei Yu, Adhiguna Kuncoro, Chris Dyer, Gábor Melis
2019 Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)  
Recurrent neural network grammars (RNNG) are generative models of language which jointly model syntax and surface structure by incrementally generating a syntax tree and sentence in a top-down, left-to-right  ...  Supervised RNNGs achieve strong language modeling and parsing performance, but require an annotated corpus of parse trees. In this work, we experiment with unsupervised learning of RNNGs.  ...  Acknowledgments We thank the members of the DeepMind language team for helpful feedback. YK is supported by a Google Fellowship. AR is supported by NSF Career 1845664.  ... 
doi:10.18653/v1/n19-1114 dblp:conf/naacl/KimRYKDM19 fatcat:3a426goq35eglonsdj5tbqbjla

Dependency Induction Through the Lens of Visual Perception [article]

Ruisi Su, Shruti Rijhwani, Hao Zhu, Junxian He, Xinyu Wang, Yonatan Bisk, Graham Neubig
2021 arXiv   pre-print
Next, we propose an extension of our model that leverages both word concreteness and visual semantic role labels in constituency and dependency parsing.  ...  However, because the signal provided by text alone is limited, recently introduced visually grounded syntax models make use of multimodal information leading to improved performance in constituency grammar  ...  Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon.  ... 
arXiv:2109.09790v1 fatcat:26ho4gusgzh4hfuetewaosm27u

Unsupervised Recurrent Neural Network Grammars [article]

Yoon Kim, Alexander M. Rush, Lei Yu, Adhiguna Kuncoro, Chris Dyer, Gábor Melis
2019 arXiv   pre-print
Recurrent neural network grammars (RNNG) are generative models of language which jointly model syntax and surface structure by incrementally generating a syntax tree and sentence in a top-down, left-to-right  ...  Supervised RNNGs achieve strong language modeling and parsing performance, but require an annotated corpus of parse trees. In this work, we experiment with unsupervised learning of RNNGs.  ...  Acknowledgments We thank the members of the DeepMind language team for helpful feedback. YK is supported by a Google Fellowship. AR is supported by NSF Career 1845664.  ... 
arXiv:1904.03746v6 fatcat:4al4smclenh37oyxntsyicqnxm

Unreliable numbers: error and harm induced by bad design can be reduced by better design

Harold Thimbleby, Patrick Oladimeji, Paul Cairns
2015 Journal of the Royal Society Interface  
We show that Monte Carlo methods enable designers to explore the implications of normal and unexpected operator behaviour, and to design systems to be more resilient to use error.  ...  We show that Monte Carlo methods can quickly and easily compare the reliability of different number entry systems.  ...  The present paper is the first to consider operator error correction and the behaviour of delete and clear keys. Monte Carlo methods use a random process to explore a state space.  ... 
doi:10.1098/rsif.2015.0685 pmid:26354830 pmcid:PMC4614478 fatcat:ohbimnkoqvdmxie7dx2jgnftkq
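
In the spirit of the paper's approach (though with an error model invented for this sketch), a Monte Carlo exploration of number entry randomly corrupts keystroke sequences and estimates how often the device ends up holding a wrong value:

    import random

    def wrong_entry_rate(intended="12.5", slip_rate=0.01, trials=100_000):
        """Estimate P(entered value != intended) under random key slips."""
        keys = "0123456789."
        wrong = 0
        for _ in range(trials):
            typed = "".join(random.choice(keys) if random.random() < slip_rate else k
                            for k in intended)
            try:
                if float(typed) != float(intended):
                    wrong += 1
            except ValueError:   # e.g. "1.2.5": a malformed entry
                wrong += 1
        return wrong / trials

Comparing this rate across interface designs, for instance one that rejects a second decimal point versus one that silently accepts it, is the kind of question the paper answers with Monte Carlo simulation.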

Differentiable Perturb-and-Parse: Semi-Supervised Parsing with a Structured Variational Autoencoder [article]

Caio Corro, Ivan Titov
2019 arXiv   pre-print
To this end, we propose a novel latent-variable generative model for semi-supervised syntactic dependency parsing.  ...  Human annotation for syntactic parsing is expensive, and large resources are available only for a fraction of languages.  ...  ACKNOWLEDGMENTS We thank Diego Marcheggiani, Wilker Ferreira Aziz and Serhii Havrylov for their comments and suggestions. We thank the anonymous reviewers for their comments.  ... 
arXiv:1807.09875v2 fatcat:t6ep446e2nd6dmgd7hackdwtjm
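
The "perturb-and-parse" of the title refers to sampling structures by perturbing arc scores with Gumbel noise and then decoding; the paper makes the decoding step a differentiable relaxation of a projective parsing algorithm. The sketch below shows only the perturbation plus a per-word argmax over heads, ignoring the tree constraint, so it is a simplification rather than the authors' procedure:

    import math, random

    def perturb(arc_scores):
        # arc_scores[child][head]: score of attaching `child` to `head`.
        # Adding Gumbel(0, 1) noise, G = -log(-log U) with U ~ Uniform(0, 1),
        # makes an argmax over the noisy scores a sample from the softmax.
        return [[s - math.log(-math.log(random.random() or 1e-12)) for s in row]
                for row in arc_scores]

    def sample_heads(arc_scores):
        return [max(range(len(row)), key=row.__getitem__) for row in perturb(arc_scores)]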

Structured Prediction of Sequences and Trees Using Infinite Contexts [chapter]

Ehsan Shareghi, Gholamreza Haffari, Trevor Cohn, Ann Nicholson
2015 Lecture Notes in Computer Science  
We propose prediction algorithms based on A* and Markov Chain Monte Carlo sampling.  ...  Empirical results demonstrate the potential of our model compared to baseline finite-context Markov models on part-of-speech tagging and syntactic parsing.  ...  We have shown how to perform prediction based on our model to predict the parse tree of a given utterance using various search algorithms, e.g. A* and Markov Chain Monte Carlo.  ... 
doi:10.1007/978-3-319-23525-7_23 fatcat:kpevaszn65a3xnsbyblskvvlk4
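
For the Markov Chain Monte Carlo half of the prediction story, a generic Metropolis-Hastings loop over candidate structures looks like the following; `propose` and `log_prob` are placeholders (with a symmetric proposal assumed), not the authors' model:

    import math, random

    def metropolis_hastings(init, log_prob, propose, steps=10_000):
        """Random-walk MH; returns the highest-probability structure visited."""
        state, best = init, init
        for _ in range(steps):
            cand = propose(state)
            delta = log_prob(cand) - log_prob(state)
            # accept with probability min(1, p(cand) / p(state))
            if random.random() < math.exp(min(0.0, delta)):
                state = cand
            if log_prob(state) > log_prob(best):
                best = state
        return best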

Structured Prediction of Sequences and Trees using Infinite Contexts [article]

Ehsan Shareghi, Gholamreza Haffari, Trevor Cohn, Ann Nicholson
2015 arXiv   pre-print
We propose prediction algorithms based on A* and Markov Chain Monte Carlo sampling.  ...  Empirical results demonstrate the potential of our model compared to baseline finite-context Markov models on part-of-speech tagging and syntactic parsing.  ...  We have shown how to perform prediction based on our model to predict the parse tree of a given utterance using various search algorithms, e.g. A* and Markov Chain Monte Carlo.  ... 
arXiv:1503.02417v1 fatcat:7txfrppwkvc5tnutoteisub6ue
Showing results 1 — 15 out of 620