Modified Dirichlet Distribution: Allowing Negative Parameters to Induce Stronger Sparsity
2016
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing
Our experiments on learning Gaussian mixtures and unsupervised dependency parsing demonstrate the advantage of mDir over Dir. ...
posterior inference. ...
Unsupervised dependency parsing aims to learn a dependency grammar from unannotated text. ...
doi:10.18653/v1/d16-1208
dblp:conf/emnlp/Tu16
fatcat:tctdkvnudvaebmjtboiftrbpxy
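The snippet above only gestures at the mechanism, so here is a minimal Python sketch of why small or negative Dirichlet-style pseudo-counts induce sparse multinomials: the MAP-style update adds (alpha - 1) to each observed count and clips at zero, zeroing out rare events. The exact mDir density and its normalization are defined in the paper and not reproduced here.

import numpy as np

def map_multinomial(counts, alpha):
    """MAP-style multinomial estimate under a (modified) Dirichlet prior.

    With alpha < 1 the pseudo-count (alpha - 1) is negative, so any
    component whose count falls below 1 - alpha is clipped to exactly
    zero; mDir pushes this further by permitting negative alpha.
    """
    raw = np.maximum(np.asarray(counts, dtype=float) + alpha - 1.0, 0.0)
    total = raw.sum()
    if total == 0.0:  # degenerate case: fall back to uniform
        return np.full(len(counts), 1.0 / len(counts))
    return raw / total

# A negative parameter zeroes out the two rare events entirely: ~[0.79, 0.21, 0, 0]
print(map_multinomial([7.0, 3.0, 0.4, 0.1], alpha=-0.5))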
Unambiguity Regularization for Unsupervised Learning of Probabilistic Grammars
2012
Conference on Empirical Methods in Natural Language Processing
In our experiments of unsupervised dependency grammar learning, we show that unambiguity regularization is beneficial to learning, and in combination with annealing (of the regularization strength) and ...
We incorporate an inductive bias into grammar learning in favor of grammars that lead to unambiguous parses on natural language sentences. ...
Any opinion, finding, and conclusions contained in this article are those of the authors and do not necessarily reflect the views of the National Science Foundation. ...
dblp:conf/emnlp/TuH12
fatcat:klcucroicbamxgj2hbkkymn33m
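One common way to realize an annealed unambiguity bias (an assumption here, not necessarily the paper's exact E-step) is to sharpen the posterior over candidate parses with an exponent that grows as the regularization strength sigma is annealed upward:

import numpy as np

def sharpen(posterior, sigma):
    """Sharpen a posterior over candidate parses to penalize ambiguity.

    sigma = 0 leaves the distribution unchanged; as sigma -> 1 the mass
    concentrates on the single best parse (sigma must stay below 1 in
    this sketch). Annealing sigma upward over EM iterations gradually
    strengthens the bias toward unambiguous parses.
    """
    p = np.asarray(posterior, dtype=float) ** (1.0 / (1.0 - sigma))
    return p / p.sum()

post = [0.5, 0.3, 0.2]
for sigma in (0.0, 0.5, 0.9):
    print(sigma, sharpen(post, sigma).round(3))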
Posterior vs Parameter Sparsity in Latent Variable Models
2009
Neural Information Processing Systems
In order to express this bias of posterior sparsity as opposed to parametric sparsity, we extend the posterior regularization framework [7]. ...
We address the problem of learning structured unsupervised models with moment sparsity typical in many natural language induction tasks. ...
Ganchev was supported by ARO MURI SUBTLE W911NF-07-1-0216. The authors would like to thank Mark Johnson and Jianfeng Gao for their help in reproducing the VEM results. ...
dblp:conf/nips/GracaGTP09
fatcat:raqkxvujhbgqfm7dshykgjjafy
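The posterior-sparsity penalty in this line of work is, as I understand it, an l1/linf norm over expected tag usage: per word type, take the per-tag max of the posterior over occurrences, then sum. A sketch under that assumption:

import numpy as np

def l1_linf_penalty(posteriors_by_word):
    """l1/linf posterior-sparsity penalty.

    posteriors_by_word maps each word type to an (occurrences x tags)
    array of posterior tag probabilities. A word type that uses one tag
    everywhere contributes ~1, while one spread over k tags contributes
    ~k, so minimizing this sum favors sparse posteriors rather than
    sparse parameters.
    """
    return sum(np.asarray(q).max(axis=0).sum()
               for q in posteriors_by_word.values())

one_tag = {"bank": np.array([[1.0, 0.0], [1.0, 0.0], [1.0, 0.0]])}
two_tags = {"bank": np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]])}
print(l1_linf_penalty(one_tag), l1_linf_penalty(two_tags))  # 1.0 2.0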
A Survey of Unsupervised Dependency Parsing
[article]
2020
arXiv
pre-print
Syntactic dependency parsing is an important task in natural language processing. ...
Unsupervised dependency parsing aims to learn a dependency parser from sentences that have no annotation of their correct parse trees. ...
Gillenwater et al. (2011) add a posterior regularization term to encourage rule sparsity. ...
arXiv:2010.01535v1
fatcat:4wd4dgducnbeti6kukpmfw5r4i
A convex and feature-rich discriminative approach to dependency grammar induction
2015
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
In this paper, we introduce a new method for the problem of unsupervised dependency parsing. Most current approaches are based on generative models. ...
Our method can easily be generalized to other unsupervised learning problems. ...
In this work, we are interested in unsupervised dependency parsing. ...
doi:10.3115/v1/p15-1133
dblp:conf/acl/GraveE15
fatcat:fh2xdc7ks5hc3mreyx3xi4axum
The Return of Lexical Dependencies: Neural Lexicalized PCFGs
[article]
2020
arXiv
pre-print
However, in this work, we present novel neural models of lexicalized PCFGs which allow us to overcome sparsity problems and effectively induce both constituents and dependencies within a single model. ...
Previous approaches to marry these two disparate syntactic formalisms (e.g. lexicalized PCFGs) have been plagued by sparsity, making them unsuitable for unsupervised grammar induction. ...
The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. ...
arXiv:2007.15135v1
fatcat:4w5elylabjef5npcnt5dgg67vm
The Return of Lexical Dependencies: Neural Lexicalized PCFGs
2020
Transactions of the Association for Computational Linguistics
Previous approaches to marry these two disparate syntactic formalisms (e.g., lexicalized PCFGs) have been plagued by sparsity, making them unsuitable for unsupervised grammar induction. ...
However, in this work, we present novel neural models of lexicalized PCFGs that allow us to overcome sparsity problems and effectively induce both constituents and dependencies within a single model. ...
The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. ...
doi:10.1162/tacl_a_00337
fatcat:otrejnbzgfgf5bhxhkqoycfy54
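Both records above describe the same idea: replace the sparse count table of a lexicalized PCFG with a neural scorer over embeddings, so unseen (rule, head-word) pairs still receive smooth scores. A minimal sketch; the inventories, dimensions, and architecture below are assumptions, not the paper's:

import numpy as np

rng = np.random.default_rng(0)
DIM = 16
nonterm_emb = {s: rng.normal(size=DIM) for s in ["S", "NP", "VP"]}  # toy inventory
word_emb = {w: rng.normal(size=DIM) for w in ["dog", "barks"]}
W = rng.normal(size=(DIM, 3 * DIM))  # shared scoring parameters

def rule_score(parent, left, right, head_word):
    """Score parent -> left right with lexical head head_word.

    Computing the score from embeddings instead of looking it up in a
    count table means combinations never seen in training still get
    nonzero scores, which is how a neural parameterization sidesteps
    the sparsity of classical lexicalized PCFGs.
    """
    x = np.concatenate([nonterm_emb[left], nonterm_emb[right],
                        word_emb[head_word]])
    return float(nonterm_emb[parent] @ np.tanh(W @ x))

print(rule_score("S", "NP", "VP", "barks"))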
Simple Type-Level Unsupervised POS Tagging
2010
Conference on Empirical Methods in Natural Language Processing
Part-of-speech (POS) tag distributions are known to exhibit sparsity: a word is likely to take a single predominant tag in a corpus. ...
Recent research has demonstrated that incorporating this sparsity constraint improves tagging accuracy. However, in existing systems, this expansion comes with a steep increase in model complexity. ...
Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors, and do not necessarily reflect the views of the funding organizations. ...
dblp:conf/emnlp/LeeHB10
fatcat:mtcghnowcrdq7n6dk2edgthycu
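The type-level idea in the entry above can be caricatured in a few lines: assign each word type a single tag and apply it to all of its tokens. The most-frequent-tag rule below is a crude stand-in for the Bayesian type-level model the paper actually uses:

from collections import Counter

def one_tag_per_type(tagged_corpus):
    """Collapse token-level tags to a single tag per word type.

    tagged_corpus is an iterable of (word, tag) pairs, e.g. from a
    noisy first-pass tagger; each word type is mapped to its most
    frequent tag, enforcing the one-predominant-tag sparsity bias.
    """
    counts = {}
    for word, tag in tagged_corpus:
        counts.setdefault(word, Counter())[tag] += 1
    return {w: c.most_common(1)[0][0] for w, c in counts.items()}

corpus = [("run", "VB"), ("run", "VB"), ("run", "NN"), ("dog", "NN")]
print(one_tag_per_type(corpus))  # {'run': 'VB', 'dog': 'NN'}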
On structured sparsity of phonological posteriors for linguistic parsing
2016
Speech Communication
To verify this hypothesis, we obtain a binary representation of phonological posteriors at the segmental level which is referred to as first-order sparsity structure; the high-order structures are obtained ...
It is then confirmed that the classification of supra-segmental linguistic events, the problem known as linguistic parsing, can be achieved with high accuracy using a simple binary pattern matching of first-order ...
Finally, we test the dependency between different supra-segmental attributes captured in codebook structures. ...
doi:10.1016/j.specom.2016.08.004
fatcat:euwa5efdjza3dcxsppzxpvv5pi
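One simple reading of the "first-order sparsity structure" above (an assumption; the paper's construction may differ) is to binarize segment-level phonological posteriors and match the resulting indicator patterns against a codebook. The class labels below are hypothetical:

import numpy as np

def first_order_structure(posteriors, thresh=0.5):
    """Binarize frame-level phonological posteriors for one segment.

    posteriors is a (frames x classes) array; averaging over frames and
    thresholding gives a binary "which classes are active" pattern. The
    threshold value is an assumption of this sketch.
    """
    return (np.asarray(posteriors).mean(axis=0) >= thresh).astype(int)

def match(pattern, codebook):
    """Nearest codebook entry by Hamming distance (binary matching)."""
    return min(codebook, key=lambda k: int(np.abs(codebook[k] - pattern).sum()))

segment = np.array([[0.9, 0.1, 0.7], [0.8, 0.2, 0.6]])
codebook = {"prominent": np.array([1, 0, 1]), "reduced": np.array([0, 1, 0])}
pattern = first_order_structure(segment)
print(pattern, match(pattern, codebook))  # [1 0 1] prominent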
Unsupervised Dependency Parsing with Acoustic Cues
2013
Transactions of the Association for Computational Linguistics
Unsupervised parsing is a difficult task that infants readily perform. ...
We describe how duration information can be incorporated into an unsupervised Bayesian dependency parser whose only other source of information is the words themselves (without punctuation or parts of ...
As mentioned, we will be incorporating word duration into unsupervised dependency parsing, producing analyses like the one in Figure 1. ...
doi:10.1162/tacl_a_00210
fatcat:7aaz5j4hjzhkhkxyzqe6hwnzbm
A Linguistics-Driven Approach to Statistical Parsing for Low-Resourced Languages
2015
IEICE transactions on information and systems
We then show that covering the most frequent grammar rules via our language parameters has a strong impact on the parsing accuracy in 12 languages. ...
The accuracy of grammar induction is still impractically low because frequent collocations of non-linguistically associable units are commonly found, resulting in dependency attachment errors. ...
Despite their efficiency, all of these unsupervised techniques have one serious drawback for practical statistical parsing: they are prone to dependency attachment errors. ...
doi:10.1587/transinf.2014dap0024
fatcat:iaqli5qjvnhpvgqimhwxizfdjq
Unsupervised Dependency Parsing: Let's Use Supervised Parsers
[article]
2015
arXiv
pre-print
We present a self-training approach to unsupervised dependency parsing that reuses existing supervised and unsupervised parsing algorithms. ...
parsing that are in turn trained on these trees. ...
In unsupervised dependency parsing, starting small is intuitive. ...
arXiv:1504.04666v1
fatcat:mbz5fxllgfcvzehsnpga3knlyq
Unsupervised Dependency Parsing: Let's Use Supervised Parsers
2015
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
We present a self-training approach to unsupervised dependency parsing that reuses existing supervised and unsupervised parsing algorithms. ...
parsing that are in turn trained on these trees. ...
In unsupervised dependency parsing, starting small is intuitive. ...
doi:10.3115/v1/n15-1067
dblp:conf/naacl/LeZ15
fatcat:4x5fntotm5g2bk6wm6tc3p7nt4
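The two records above describe a pipeline that is easy to state as pseudocode: seed trees come from an unsupervised parser, a supervised parser is trained on those silver trees, and the corpus is re-parsed. The function names below are hypothetical stand-ins for whichever concrete parsers are reused:

def self_train(sentences, unsupervised_parser, supervised_trainer, rounds=2):
    """Self-training loop for unsupervised dependency parsing.

    unsupervised_parser(sentences) -> list of dependency trees;
    supervised_trainer(sentences, trees) -> parser with a .parse()
    method. Both are placeholders, not a real API.
    """
    trees = unsupervised_parser(sentences)  # seed "silver" trees
    parser = None
    for _ in range(rounds):
        # Train a supervised parser on the current silver trees, then
        # re-parse the corpus with it to refine them.
        parser = supervised_trainer(sentences, trees)
        trees = [parser.parse(s) for s in sentences]
    return parser, trees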
Unrolling Loopy Top-Down Semantic Feedback in Convolutional Deep Networks
2014
2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops
In this paper, we propose a novel way to perform top-down semantic feedback in convolutional deep networks for efficient and accurate image parsing. ...
We also show how to add global appearance/semantic features, which have been shown to improve image parsing performance in state-of-the-art methods and were not present in previous convolutional approaches. ...
The parameters of different deep architectures and classifiers cannot be shared since the deep architectures are trained in an unsupervised way and their input data depends on the output of previous classifier ...
doi:10.1109/cvprw.2014.80
dblp:conf/cvpr/GattaRW14
fatcat:4u53du5ojbcmdh54wmm7ijtg5y
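"Unrolling" top-down feedback can be sketched as a fixed number of feed-forward steps in which the current class posteriors are appended to the bottom-up features and rescored; the per-step untied weights and the tiny linear scorer below are assumptions of this sketch, not the paper's architecture:

import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def unrolled_feedback(features, weights, n_classes=4):
    """Unroll loopy top-down semantic feedback into feed-forward steps.

    features: (pixels x d) bottom-up features; weights: one
    (d + n_classes) x n_classes matrix per unrolled step. Each step
    rescores every pixel using the previous step's posteriors as extra
    semantic-context inputs.
    """
    post = np.full((features.shape[0], n_classes), 1.0 / n_classes)
    for W in weights:
        post = softmax(np.hstack([features, post]) @ W)
    return post

rng = np.random.default_rng(1)
feats = rng.normal(size=(5, 8))
steps = [rng.normal(size=(8 + 4, 4)) for _ in range(3)]
print(unrolled_feedback(feats, steps).round(2))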
Turkish PoS Tagging by Reducing Sparsity with Morpheme Tags in Small Datasets
[article]
2017
arXiv
pre-print
Results show that using morpheme tags in PoS tagging helps alleviate the sparsity in emission probabilities. ...
We deal with sparsity in Turkish by adopting morphological features for part-of-speech tagging. ...
[9] shows that using inflectional groups as units in Turkish dependency parsing increases the parsing performance. We leave using the contextual information in morpheme tagging as future work. ...
arXiv:1703.03200v2
fatcat:4oejtwfj7jbnff2y4f2lanwsja
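The sparsity-reduction idea in the entry above, factoring a word's emission probability over its morpheme tags instead of its surface form, can be sketched as follows (the morphological analysis and the probabilities are hypothetical):

def emission_prob(word_analysis, pos, emis, smooth=1e-6):
    """Emission probability of a word given a PoS via morpheme tags.

    word_analysis is the word's morpheme-tag sequence, e.g.
    ["Noun", "A3sg", "P2sg"]. Multiplying per-morpheme-tag emissions
    replaces one sparse P(word|pos) table with a few dense
    P(morph_tag|pos) tables, which is what alleviates the emission
    sparsity the abstract mentions.
    """
    p = 1.0
    for m in word_analysis:
        p *= emis[pos].get(m, smooth)
    return p

emis = {"NOUN": {"Noun": 0.8, "A3sg": 0.5, "P2sg": 0.1}}
print(emission_prob(["Noun", "A3sg", "P2sg"], "NOUN", emis))  # 0.04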
Showing results 1-15 of 629 results.