103,288 Hits in 7.3 sec

On the State of the Art of Evaluation in Neural Language Models [article]

Gábor Melis, Chris Dyer, Phil Blunsom
2017 arXiv   pre-print
Ongoing innovations in recurrent neural network architectures have provided a steady influx of apparently state-of-the-art results on language modelling benchmarks.  ...  We establish a new state of the art on the Penn Treebank and Wikitext-2 corpora, as well as strong baselines on the Hutter Prize dataset.  ...  ENWIK8: In contrast to the previous datasets, our numbers on this task (reported in BPC, following convention) are slightly off the state of the art.  ... 
arXiv:1707.05589v2 fatcat:ozrvqn2wgjgt3fon25jhqnosne
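The results quoted above mix two conventions: word-level perplexity (Penn Treebank, Wikitext-2) and bits per character (enwik8/Hutter Prize). Both are simple transformations of the average negative log-likelihood; the short Python sketch below shows the conversion, with illustrative numbers rather than figures taken from the paper.

    import math

    def perplexity_from_nats(avg_nll_nats: float) -> float:
        """Word-level perplexity from the average per-word negative log-likelihood in nats."""
        return math.exp(avg_nll_nats)

    def bpc_from_nats(avg_nll_nats: float) -> float:
        """Bits per character from the average per-character negative log-likelihood in nats."""
        return avg_nll_nats / math.log(2)

    print(perplexity_from_nats(3.93))  # ~50.9 word-level perplexity
    print(bpc_from_nats(0.80))         # ~1.15 bits per character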

An In-depth Walkthrough on Evolution of Neural Machine Translation [article]

Rohan Jagtap, Dr. Sudhir N. Dhage
2020 arXiv   pre-print
This paper aims to study the major trends in Neural Machine Translation, the state of the art models in the domain and a high-level comparison between them.  ...  Neural Machine Translation (NMT) methodologies have burgeoned from using simple feed-forward architectures to the state of the art, viz. the BERT model.  ...  The state of the art Language Modelling concepts were engendered and disseminated with Neural Networks.  ... 
arXiv:2004.04902v1 fatcat:giua7w4y4bh3pbucubmh43mlc4

Progress and Tradeoffs in Neural Language Models [article]

Raphael Tang, Jimmy Lin
2018 arXiv   pre-print
We compare state-of-the-art NLMs with "classic" Kneser-Ney (KN) LMs in terms of energy usage, latency, perplexity, and prediction accuracy using two standard benchmarks.  ...  Undoubtedly, neural language models (NLMs) have reduced perplexity by impressive amounts.  ...  Quasi-recurrent neural networks (QRNNs) achieve the current state of the art in word-level language modeling (Merity et al., 2018a).  ... 
arXiv:1811.00942v1 fatcat:h2e3nyv2y5eu7e4oniuepy65ty
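For readers unfamiliar with the "classic" baseline being compared against, the following is a minimal, self-contained sketch of interpolated Kneser-Ney smoothing for a bigram model. It uses a single fixed discount and a toy corpus; it is a didactic illustration, not the KN implementation evaluated in the paper.

    from collections import Counter

    def train_kn_bigram(tokens, discount=0.75):
        """Interpolated Kneser-Ney bigram model with one fixed discount (didactic sketch)."""
        bigrams = Counter(zip(tokens, tokens[1:]))
        context_counts = Counter()                           # how often each word occurs as a history
        for (w1, _), c in bigrams.items():
            context_counts[w1] += c
        continuation = Counter(w2 for (_, w2) in bigrams)    # distinct histories each word follows
        followers = Counter(w1 for (w1, _) in bigrams)       # distinct words following each history
        total_bigram_types = len(bigrams)

        def prob(w1, w2):
            p_cont = continuation[w2] / total_bigram_types   # lower-order continuation probability
            if context_counts[w1] == 0:                      # unseen history: back off entirely
                return p_cont
            lam = discount * followers[w1] / context_counts[w1]
            p_bigram = max(bigrams[(w1, w2)] - discount, 0.0) / context_counts[w1]
            return p_bigram + lam * p_cont

        return prob

    prob = train_kn_bigram("the cat sat on the mat the cat ran".split())
    print(prob("the", "cat"))   # ~0.49 on this toy corpus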

Deep Affix Features Improve Neural Named Entity Recognizers

Vikas Yadav, Rebecca Sharp, Steven Bethard
2018 Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics  
Additionally, we show improvement on SemEval 2013 task 9.1 DrugNER, achieving state of the art results on the MedLine dataset and the second best results overall (-1.3% from state of the art).  ...  1.5-2.3 percent over the state of the art without relying on any dictionary features.  ...  Morphological features were highly effective in named entity recognizers before neural networks became the new state-of-the-art.  ... 
doi:10.18653/v1/s18-2021 dblp:conf/starsem/YadavSB18 fatcat:4rekxxh5sve5ra3eai4got7fnm
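The snippet above does not spell out what the affix features are; a common realisation is to take fixed-length prefixes and suffixes of each token and feed them to the tagger alongside word and character representations. The sketch below shows only that feature-extraction step, with hypothetical length choices that may differ from the paper's.

    def affix_features(token: str, max_len: int = 3):
        """Prefix/suffix n-grams (n = 1..max_len) for one token; a hypothetical extractor."""
        token = token.lower()
        prefixes = [token[:n] for n in range(1, max_len + 1) if len(token) >= n]
        suffixes = [token[-n:] for n in range(1, max_len + 1) if len(token) >= n]
        return {"prefixes": prefixes, "suffixes": suffixes}

    print(affix_features("Ibuprofen"))
    # {'prefixes': ['i', 'ib', 'ibu'], 'suffixes': ['n', 'en', 'fen']}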

Introduction to the special issue on deep learning approaches for machine translation

Marta R. Costa-jussà, Alexandre Allauzen, Loïc Barrault, Kyunghyun Cho, Holger Schwenk
2017 Computer Speech and Language  
This introduction covers all topics contained in the papers included in this special issue, which basically are: integration of deep learning in statistical MT; development of the end-to-end neural MT system; and introduction of deep learning in interactive MT and MT evaluation.  ...  Acknowledgements The work of the 1st author is supported by the Spanish Ministerio de Economía y  ... 
doi:10.1016/j.csl.2017.03.001 fatcat:qrep7sdnurfvnmoogz4hssnnh4

TransQuest: Translation Quality Estimation with Cross-lingual Transformers [article]

Tharindu Ranasinghe, Constantin Orasan, Ruslan Mitkov
2020 arXiv   pre-print
Our evaluation shows that the proposed methods achieve state-of-the-art results outperforming current open-source quality estimation frameworks when trained on datasets from WMT.  ...  However, the majority of these methods work only on the language pair they are trained on and need retraining for new language pairs.  ...  state-of-the-art quality estimation methods in low-resource language pairs.  ... 
arXiv:2011.01536v2 fatcat:woigmbwhqrbxlne2sbancgvrwa
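TransQuest's approach, as summarised above, treats quality estimation as regression with a cross-lingual transformer over (source, translation) pairs. The sketch below shows that general setup with Hugging Face transformers and an off-the-shelf XLM-R checkpoint; it is not the TransQuest library's own API, and a usable model would still need fine-tuning on WMT quality labels.

    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    # Generic multilingual encoder with a single-output regression head (head untrained here).
    tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
    model = AutoModelForSequenceClassification.from_pretrained("xlm-roberta-base", num_labels=1)

    def predict_quality(source: str, translation: str) -> float:
        """Regress a quality score for one (source, translation) pair."""
        batch = tokenizer(source, translation, return_tensors="pt", truncation=True)
        with torch.no_grad():
            return model(**batch).logits.squeeze().item()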

Characterizing the hyper-parameter space of LSTM language models for mixed context applications [article]

Victor Akinwande, Sekou L. Remy
2017 arXiv   pre-print
Applying state of the art deep learning models to novel real world datasets gives a practical evaluation of the generalizability of these models.  ...  We present work to characterize the hyper-parameter space of an LSTM for language modeling on a code-mixed corpus.  ...  The effect of this would be that reproducing state of the art neural models on a unique dataset would require significant hyper-parameter search, thus limiting the reach of these models to parties with  ... 
arXiv:1712.03199v1 fatcat:oz3pmfvvk5eixaggbs2ynojcgy
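Characterising a hyper-parameter space, as in the entry above, typically means sampling many configurations and training one model per configuration. A bare-bones random-search loop is sketched below; the ranges and the train_and_evaluate callback are placeholders, not the settings used in the paper.

    import math
    import random

    # Illustrative ranges only; the paper's actual search space may differ.
    SEARCH_SPACE = {
        "hidden_size":   [128, 256, 512, 1024],
        "num_layers":    [1, 2, 3],
        "dropout":       (0.0, 0.7),     # sampled uniformly
        "learning_rate": (1e-4, 1e-2),   # sampled log-uniformly
    }

    def sample_config():
        lr_lo, lr_hi = SEARCH_SPACE["learning_rate"]
        return {
            "hidden_size":   random.choice(SEARCH_SPACE["hidden_size"]),
            "num_layers":    random.choice(SEARCH_SPACE["num_layers"]),
            "dropout":       random.uniform(*SEARCH_SPACE["dropout"]),
            "learning_rate": 10 ** random.uniform(math.log10(lr_lo), math.log10(lr_hi)),
        }

    def random_search(train_and_evaluate, budget=50):
        """train_and_evaluate(config) -> validation perplexity; supplied by the caller."""
        trials = [(train_and_evaluate(cfg), cfg) for cfg in (sample_config() for _ in range(budget))]
        return min(trials, key=lambda t: t[0])   # (best validation perplexity, best config)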

Connecting Language and Knowledge with Heterogeneous Representations for Neural Relation Extraction [article]

Peng Xu, Denilson Barbosa
2019 arXiv   pre-print
We help close the gap with a framework that unifies the learning of RE and KBE models leading to significant improvements over the state-of-the-art in RE.  ...  For general purpose KBs, this is often done through Relation Extraction (RE), the task of predicting KB relations expressed in text mentioning entities known to the KB.  ...  Acknowledgments This work was supported in part by grants from the Natural Sciences and Engineering Research Council of Canada and a gift from Diffbot Inc.  ... 
arXiv:1903.10126v3 fatcat:hwseki6cxrgatlrlmh7px5jsna

Improving Named Entity Recognition for Morphologically Rich Languages Using Word Embeddings

Hakan Demir, Arzucan Özgür
2014 2014 13th International Conference on Machine Learning and Applications  
Unlike the previous state-of-the-art systems developed for these languages, our system does not make use of any language dependent features.  ...  In this paper, we addressed the Named Entity Recognition (NER) problem for morphologically rich languages by employing a semi-supervised learning approach based on neural networks.  ...  State-of-the-art systems developed for such languages usually depend on manually designed language specific features that utilize the rich morphological structures of the words.  ... 
doi:10.1109/icmla.2014.24 dblp:conf/icmla/DemirO14 fatcat:falw4ef5cngm7iey4wizocgjly

An Extensive Empirical Evaluation of Character-Based Morphological Tagging for 14 Languages

Georg Heigold, Guenter Neumann, Josef van Genabith
2017 Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers  
We evaluate on 14 languages and observe consistent gains over a state-of-the-art morphological tagger across all languages except for English and French, where we match the state-of-the-art.  ...  This paper investigates neural character-based morphological tagging for languages with complex morphology and large tag sets.  ...  Acknowledgment This work has been partly funded by the European Union's Horizon 2020 research and innovation programme under grant agreement No. 645452 (QT21).  ... 
doi:10.18653/v1/e17-1048 dblp:conf/eacl/GenabithHN17 fatcat:mmvxvhzm7vbolmjy5zskjw5m3i

Neural Task Representations as Weak Supervision for Model Agnostic Cross-Lingual Transfer [article]

Sujay Kumar Jauhar, Michael Gamon, Patrick Pantel
2018 arXiv   pre-print
On a battery of tests, we show that our models outperform a number of strong baselines and rival state-of-the-art results, which rely on more complex approaches and significantly more resources and data  ...  Yet, the task of transferring a model from one language to another can be expensive in terms of annotation costs, engineering time and effort.  ...  state-of-the-art on one language, while relying on a much simpler method and requiring significantly fewer resources.  ... 
arXiv:1811.01115v1 fatcat:d4axa3dbfnfuhizpbix2ngyx24

IndicSpeech: Text-to-Speech Corpus for Indian Languages

Nimisha Srivastava, Rudrabha Mukhopadhyay, Prajwal K. R, C. V. Jawahar
2020 International Conference on Language Resources and Evaluation  
In this work, we also train a state-of-the-art TTS system for each of these languages and report their performances. The collected corpus, code, and trained models are made publicly available.  ...  We believe that one of the major reasons for this is the lack of large, publicly available text-to-speech corpora in these languages that are suitable for training neural text-to-speech systems.  ...  We see that the corpus consists of a diverse vocabulary and is at a scale well-suited for state-of-the-art neural TTS models.  ... 
dblp:conf/lrec/SrivastavaMRJ20 fatcat:ttzc6v7pxnedhb3yoaj4kmlcdi

Neural Generation of Regular Expressions from Natural Language with Minimal Domain Knowledge [article]

Nicholas Locascio, Karthik Narasimhan, Eduardo DeLeon, Nate Kushman, Regina Barzilay
2016 arXiv   pre-print
Our resulting model achieves a performance gain of 19.6% over previous state-of-the-art models.  ...  To fully explore the potential of neural models, we propose a methodology for collecting a large corpus of regular expression, natural language pairs.  ...  Despite the small size of KB13, our model achieves state-of-the-art results on this very resource-constrained dataset (814 examples).  ... 
arXiv:1608.03000v1 fatcat:kg3i4y56nrboxcznzl635ywuky

Dynamic Evaluation of Neural Sequence Models [article]

Ben Krause, Emmanuel Kahembwe, Iain Murray, Steve Renals
2017 arXiv   pre-print
Dynamic evaluation improves the state-of-the-art word-level perplexities on the Penn Treebank and WikiText-2 datasets to 51.1 and 44.3 respectively, and the state-of-the-art character-level cross-entropies  ...  We present methodology for using dynamic evaluation to improve neural sequence models.  ...  Neural caching has recently been used to improve the state-of-the-art at word-level language modelling (Merity et al., 2017a).  ... 
arXiv:1709.07432v2 fatcat:jvzrw46qkfaltaynxk7mlibriu
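Dynamic evaluation, as quoted above, adapts the model to the test stream while it is being scored: the stream is split into segments, each segment is scored with the current weights, and a gradient step on that segment then updates the weights before the next one. The PyTorch-style sketch below shows only this loop; plain SGD stands in for the paper's more involved update rule, and the hyper-parameters are placeholders.

    import torch

    def dynamic_evaluation(model, loss_fn, segments, lr=1e-4):
        """Score a test stream segment by segment, adapting the model after each segment.

        `segments` yields (inputs, targets) pairs in temporal order; each segment's loss
        is recorded *before* that segment is used for adaptation, so the reported number
        is still a fair test loss.
        """
        opt = torch.optim.SGD(model.parameters(), lr=lr)   # placeholder optimiser
        total_loss, total_tokens = 0.0, 0
        for inputs, targets in segments:
            model.zero_grad()
            loss = loss_fn(model(inputs), targets)         # evaluate with current weights
            total_loss += loss.item() * targets.numel()
            total_tokens += targets.numel()
            loss.backward()                                # then adapt on the segment just seen
            opt.step()
        return total_loss / total_tokens                   # average per-token loss in nats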

Tree-to-tree Neural Networks for Program Translation [article]

Xinyun Chen, Chang Liu, Dawn Song
2018 arXiv   pre-print
We evaluate the program translation capability of our tree-to-tree model against several state-of-the-art approaches.  ...  Further, our approach can improve the previous state-of-the-art program translation approaches by a margin of 20 points on the translation of real-world projects.  ...  Acknowledgement We thank the anonymous reviewers for their valuable comments. This material is in part  ... 
arXiv:1802.03691v3 fatcat:k2ew6jncj5bepancofv337yg2y
Showing results 1 — 15 out of 103,288 results