596 Hits in 9.9 sec

Incorporating Syntactic Uncertainty in Neural Machine Translation with Forest-to-Sequence Model [article]

Poorya Zaremoodi, Gholamreza Haffari
2017 arXiv   pre-print
Incorporating syntactic information into Neural Machine Translation models is a way to compensate for their need for large amounts of parallel training text, especially for low-resource language pairs  ...  In this paper, we propose a forest-to-sequence attentional Neural Machine Translation model that makes use of exponentially many parse trees of the source sentence to compensate for parser errors.  ...  We would like to thank Wray Buntine and Bin Li for fruitful discussions.  ... 
arXiv:1711.07019v2 fatcat:6rtemkbktfbejbo45oojwc67jy

Syntax-based Transformer for Neural Machine Translation

Chunpeng Ma, Akihiro Tamura, Masao Utiyama, Eiichiro Sumita, Tiejun Zhao
2020 Journal of Natural Language Processing  
Our method is general in that it is applicable to both constituent trees and packed forests.  ...  Syntactic information can be used in various models, including models in statistical machine translation (Mi and Huang 2008), RNN-based NMT (Kuncoro, Dyer, Hale, Yogatama, Clark, and Blunsom 2018) and  ...  "Incorporating Syntactic Uncertainty in Neural Machine Translation with a Forest-to-Sequence Model."  ... 
doi:10.5715/jnlp.27.445 fatcat:sm2etlr6tvdyhasxymtnfwjc4e
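Syntax-based sequence models like the one above typically feed the encoder a linearized parse of the source sentence. A minimal sketch of bracketed-tree linearization (the tree representation and function name here are illustrative assumptions, not this paper's actual implementation):

```python
# Minimal sketch: flatten a constituency tree, given as nested
# (label, children) pairs, into a bracketed token sequence that a
# sequence encoder can consume. Leaves carry the word as their label.

def linearize(tree):
    """Flatten a nested (label, children) tree into bracketed tokens."""
    label, children = tree
    if not children:              # leaf: emit the word itself
        return [label]
    tokens = ["(" + label]        # opening bracket carries the phrase label
    for child in children:
        tokens.extend(linearize(child))
    tokens.append(")")
    return tokens

tree = ("S", [("NP", [("she", [])]),
              ("VP", [("runs", [])])])
print(" ".join(linearize(tree)))  # → (S (NP she ) (VP runs ) )
```

A packed forest, by contrast, encodes many alternative trees at once, so it cannot be flattened this way without enumerating alternatives; that is part of what makes forest-capable methods more general.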

A Survey of Machine Learning for Big Code and Naturalness

Miltiadis Allamanis, Earl T. Barr, Premkumar Devanbu, Charles Sutton
2018 ACM Computing Surveys  
We present a taxonomy based on the underlying design principles of each model and use it to navigate the literature.  ...  Research at the intersection of machine learning, programming languages, and software engineering has recently taken important steps in proposing learnable probabilistic models of source code that exploit  ...  But a neural network can exploit it by learning to assign these two sequences similar vectors. Syntactic Models (trees).  ... 
doi:10.1145/3212695 fatcat:iuuocyctg5adjmobhc2zw23rfu

Exploiting Acoustic and Syntactic Features for Automatic Prosody Labeling in a Maximum Entropy Framework

V.K. Rangarajan Sridhar, S. Bangalore, S.S. Narayanan
2008 IEEE Transactions on Audio, Speech, and Language Processing  
Our framework utilizes novel syntactic features in the form of supertags and a quantized acoustic-prosodic feature representation that is similar to linear parameterizations of the prosodic contour.  ...  The reported results are significantly better than previously reported results and demonstrate the strength of the maximum entropy model in jointly modeling simple lexical, syntactic, and acoustic features  ...  We are also working on incorporating our automatic prosody labeler into a speech-to-speech translation framework.  ... 
doi:10.1109/tasl.2008.917071 pmid:19603083 pmcid:PMC2709295 fatcat:4khywh3dcrf4nbx3do6mhjlu6a

A Survey of Machine Learning for Big Code and Naturalness [article]

Miltiadis Allamanis, Earl T. Barr, Premkumar Devanbu, Charles Sutton
2018 arXiv   pre-print
We present a taxonomy based on the underlying design principles of each model and use it to navigate the literature.  ...  Research at the intersection of machine learning, programming languages, and software engineering has recently taken important steps in proposing learnable probabilistic models of source code that exploit  ...  But a neural network can exploit it by learning to assign these two sequences similar vectors. Syntactic Models.  ... 
arXiv:1709.06182v2 fatcat:hbvgyonqsjgq3nqwji6jf3aybe

A Simple and Accurate Syntax-Agnostic Neural Model for Dependency-based Semantic Role Labeling [article]

Diego Marcheggiani, Anton Frolov, Ivan Titov
2017 arXiv   pre-print
We introduce a simple and accurate neural model for dependency-based semantic role labeling. Our model predicts predicate-argument dependencies relying on states of a bidirectional LSTM encoder.  ...  Syntactic parsers are unreliable on out-of-domain data, so standard (i.e., syntactically-informed) SRL models are hindered when tested in this setting.  ...  The authors would like to thank Michael Roth for his helpful suggestions.  ... 
arXiv:1701.02593v2 fatcat:6stapn4sojej3lhg373wc5mbau

A Simple and Accurate Syntax-Agnostic Neural Model for Dependency-based Semantic Role Labeling

Diego Marcheggiani, Anton Frolov, Ivan Titov
2017 Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017)  
We introduce a simple and accurate neural model for dependency-based semantic role labeling. Our model predicts predicate-argument dependencies relying on states of a bidirectional LSTM encoder.  ...  Syntactic parsers are unreliable on out-of-domain data, so standard (i.e., syntactically-informed) SRL models are hindered when tested in this setting.  ...  The authors would like to thank Michael Roth for his helpful suggestions.  ... 
doi:10.18653/v1/k17-1041 dblp:conf/conll/MarcheggianiFT17 fatcat:5xvvjbq7njfqzh5hkf4goaalpi
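The entry above scores predicate-argument dependencies from the states of a bidirectional recurrent encoder. A minimal sketch of that idea, with a plain tanh-RNN cell standing in for the LSTM and random placeholder weights, so the scores are illustrative only:

```python
import math
import random

# Sketch: encode a sentence forward and backward with a simple recurrent
# cell, concatenate the two state sequences, then score each candidate
# argument token against the predicate token via a dot product of states.
# All weights are random placeholders; a real model learns them.

random.seed(0)
D_EMB, D_HID = 4, 3

def rand_mat(rows, cols):
    return [[random.uniform(-0.5, 0.5) for _ in range(cols)] for _ in range(rows)]

def matvec(m, v):
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in m]

def rnn_pass(embs, w_in, w_rec):
    h, states = [0.0] * D_HID, []
    for x in embs:                     # simple tanh recurrence (LSTM stand-in)
        h = [math.tanh(a + b) for a, b in zip(matvec(w_in, x), matvec(w_rec, h))]
        states.append(h)
    return states

def bi_encode(embs):
    fwd = rnn_pass(embs, rand_mat(D_HID, D_EMB), rand_mat(D_HID, D_HID))
    bwd = rnn_pass(embs[::-1], rand_mat(D_HID, D_EMB), rand_mat(D_HID, D_HID))[::-1]
    return [f + b for f, b in zip(fwd, bwd)]   # concatenate both directions

sentence = ["the", "cat", "chased", "mice"]
embs = [[random.uniform(-1, 1) for _ in range(D_EMB)] for _ in sentence]
states = bi_encode(embs)

pred = 2  # "chased" is the predicate
scores = {sentence[i]: sum(a * b for a, b in zip(states[pred], s))
          for i, s in enumerate(states) if i != pred}
```

The syntax-agnostic point is that nothing here consults a parse tree: the argument candidates are scored purely from the encoder states.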

Rule Augmented Unsupervised Constituency Parsing [article]

Atul Sahay, Anshul Nasery, Ayush Maheshwari, Ganesh Ramakrishnan, Rishabh Iyer
2021 arXiv   pre-print
Recently, unsupervised parsing of syntactic trees has gained considerable attention. A prototypical approach to such unsupervised parsing employs reinforcement learning and auto-encoders.  ...  We propose an approach that utilizes very generic linguistic knowledge of the language, present in the form of syntactic rules, thus inducing better syntactic structures.  ...  Ayush Maheshwari is supported by a Fellowship from Ekal Foundation (www.ekal.org).  ... 
arXiv:2105.10193v1 fatcat:lol3n6ia6zgxhmedznw5lf4dnm

Diving Deep into Deep Learning: History, Evolution, Types and Applications

2020 International Journal of Innovative Technology and Exploring Engineering (IJITEE), Volume 8, Issue 10  
While machine learning is occupied with supervised and unsupervised methods, deep learning pursues its motivation of replicating the human nervous system by incorporating advanced types of Neural Networks  ...  This paper provides an introductory tutorial to the domain of deep learning with its history, evolution, and an introduction to some sophisticated neural networks such as the Convolutional Neural Network  ...  ACKNOWLEDGMENT The authors would like to thank REVA University for providing the necessary facility to carry out the research work.  ... 
doi:10.35940/ijitee.a4865.019320 fatcat:orn2asvoxfaxvlc5iv7kec4nm4

Sampling-Based Minimum Bayes Risk Decoding for Neural Machine Translation [article]

Bryan Eikema, Wilker Aziz
2021 arXiv   pre-print
In neural machine translation (NMT), we search for the mode of the model distribution to form predictions.  ...  The mode, as well as other high-probability translations found by beam search, has been shown to often be inadequate in a number of ways.  ...  BEER (Stanojević and Sima'an, 2014) is a character-based metric that has been shown to correlate well with human judgements in many WMT metrics tasks (Macháček and Bojar, 2014; Bojar et al., 2016b).  ... 
arXiv:2108.04718v1 fatcat:t3noapnzo5fatcgqslvzbvnj6a
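Sampling-based MBR decoding, as described in the entry above, replaces mode-seeking beam search with an expected-utility argmax over model samples. A minimal sketch, with unigram F1 standing in for a real utility such as BEER or BLEU and made-up candidate strings:

```python
from collections import Counter

# Sketch of sampling-based minimum Bayes risk (MBR) decoding: draw samples
# from the model, then output the candidate whose average utility against
# the other samples is highest. Unigram F1 is a toy stand-in utility.

def unigram_f1(hyp, ref):
    h, r = Counter(hyp.split()), Counter(ref.split())
    overlap = sum((h & r).values())        # multiset intersection of tokens
    if overlap == 0:
        return 0.0
    p = overlap / sum(h.values())          # precision
    q = overlap / sum(r.values())          # recall
    return 2 * p * q / (p + q)

def mbr_decode(samples):
    # Monte Carlo estimate of expected utility: average the utility of a
    # candidate against every sample, using the samples as pseudo-references.
    def expected_utility(cand):
        return sum(unigram_f1(cand, s) for s in samples) / len(samples)
    return max(samples, key=expected_utility)

samples = ["the cat sat", "the cat sat down", "a cat sat", "dogs run"]
print(mbr_decode(samples))  # → "the cat sat"
```

Note the decision rule: the winner need not be the single most probable sample; it is the one most "central" under the chosen utility, which is what makes MBR robust to pathological modes.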

Neural Machine Translation: A Review and Survey [article]

Felix Stahlberg
2020 arXiv   pre-print
with a single neural network.  ...  Statistical MT, which mainly relies on various count-based models and which used to dominate MT research for decades, has largely been superseded by neural machine translation (NMT), which tackles translation  ...  Recurrent neural network grammars that represent syntactic parse trees as sequence of actions were applied to machine translation by ; Eriguchi et al. (2017) .  ... 
arXiv:1912.02047v2 fatcat:ih4irwghsze5plbtmcbrlaqoby

Opportunities And Obstacles For Deep Learning In Biology And Medicine [article]

Travers Ching, Daniel S. Himmelstein, Brett K. Beaulieu-Jones, Alexandr A. Kalinin, Brian T. Do, Gregory P. Way, Enrico Ferrero, Paul-Michael Agapow, Michael Zietz, Michael M Hoffman, Wei Xie, Gail L. Rosen (+24 others)
2017 bioRxiv   pre-print
Deep learning, which describes a class of machine learning algorithms, has recently shown impressive results across a variety of domains.  ...  More work is needed to address concerns related to interpretability and how best to model each problem.  ...  We would like to thank Anna Greene for a careful proofreading of the manuscript in advance of the first submission.  ... 
doi:10.1101/142760 fatcat:l7zvbtbgxjamtd735vir2trw6q

Opportunities and obstacles for deep learning in biology and medicine

Travers Ching, Daniel S. Himmelstein, Brett K. Beaulieu-Jones, Alexandr A. Kalinin, Brian T. Do, Gregory P. Way, Enrico Ferrero, Paul-Michael Agapow, Michael Zietz, Michael M. Hoffman, Wei Xie, Gail L. Rosen (+24 others)
2018 Journal of the Royal Society Interface  
Though progress has been made linking a specific neural network's prediction to input features, understanding how users should interpret these models to make testable hypotheses about the system under  ...  Deep learning describes a class of machine learning algorithms that are capable of combining raw inputs into layers of intermediate features.  ...  We thank Aaron Sheldon, who contributed text but did not formally approve the manuscript; Anna Greene for a careful proofreading of the manuscript in advance of the first submission; Sebastian Raschka  ... 
doi:10.1098/rsif.2017.0387 pmid:29618526 pmcid:PMC5938574 fatcat:65o4xmp53nc6zmj37srzuht6tq

A Tale of Two Perplexities: Sensitivity of Neural Language Models to Lexical Retrieval Deficits in Dementia of the Alzheimer's Type [article]

Trevor Cohen, Serguei Pakhomov
2020 arXiv   pre-print
In recent years there has been a burgeoning interest in the use of computational methods to distinguish between elicited speech samples produced by patients with dementia, and those from healthy controls  ...  We find that perplexity of neural LMs is strongly and differentially associated with lexical frequency, and that a mixture model resulting from interpolating control and dementia LMs improves upon the  ...  Neural LM perplexity Recurrent neural network language models (RNN-LM) (Mikolov et al., 2010) are widely used in machine translation and other applications such as sequence labeling (Goldberg, 2016)  ... 
arXiv:2005.03593v2 fatcat:ievtfl6lsfbqbezqu6r6qrmywi
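The entry above turns on two quantities: the perplexity a language model assigns to a speech sample, and a mixture model interpolating a control LM with a dementia LM. A minimal sketch of both, using toy per-token probability lists rather than real model outputs:

```python
import math

# Sketch: perplexity of a token sequence given each token's conditional
# probability p(w_i | w_<i), and per-token linear interpolation of two LMs
# (e.g. control and dementia models) at mixture weight alpha.

def perplexity(token_probs):
    # exp of the average negative log-probability per token
    n = len(token_probs)
    log_sum = sum(math.log(p) for p in token_probs)
    return math.exp(-log_sum / n)

def interpolate(probs_a, probs_b, alpha):
    # mixture model: alpha * p_A(w) + (1 - alpha) * p_B(w), token by token
    return [alpha * a + (1 - alpha) * b for a, b in zip(probs_a, probs_b)]

control = [0.2, 0.1, 0.25, 0.05]    # toy per-token probs under control LM
dementia = [0.05, 0.3, 0.1, 0.2]    # toy per-token probs under dementia LM

print(perplexity(control))
print(perplexity(interpolate(control, dementia, 0.5)))
```

Lower perplexity means the model finds the sample more predictable; the diagnostic signal in the paper comes from comparing perplexities under the two (and the interpolated) models.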

Message from the general chair

Benjamin C. Lee
2015 2015 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS)  
To inject knowledge, we use a state-of-the-art system which cross-links (or "grounds") expressions in free text to Wikipedia.  ...  We propose a joint learning model which combines pairwise classification and mention clustering with Markov logic.  ...  We treat the problem as a sequence labelling task, which allows us to incorporate sequence features without using gold standard information.  ... 
doi:10.1109/ispass.2015.7095776 dblp:conf/ispass/Lee15 fatcat:ehbed6nl6barfgs6pzwcvwxria