
On the Properties of Neural Machine Translation: Encoder-Decoder Approaches [article]

Kyunghyun Cho and Bart van Merrienboer and Dzmitry Bahdanau and Yoshua Bengio
2014 arXiv   pre-print
In this paper, we focus on analyzing the properties of neural machine translation using two models: the RNN Encoder-Decoder and a newly proposed gated recursive convolutional neural network.  ...  Neural machine translation models often consist of an encoder and a decoder.  ...  Acknowledgments The authors would like to acknowledge the support of the following agencies for research funding and computing support: NSERC, Calcul Québec, Compute Canada, the Canada Research Chairs  ... 
arXiv:1409.1259v2 fatcat:i2l2qmkyfjakdmu7663edwfnaq
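
The snippet above describes the basic encoder-decoder pattern: an encoder compresses the source sentence into a fixed-length state that a decoder unrolls into the target sentence. A minimal sketch of that pattern follows, assuming PyTorch and GRU layers; the names and dimensions are illustrative, not the paper's exact configuration (which also studies a gated recursive convolutional encoder).

```python
# Minimal RNN encoder-decoder sketch (illustrative, not the paper's exact model).
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab_size, hidden_size):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.rnn = nn.GRU(hidden_size, hidden_size, batch_first=True)

    def forward(self, src):
        # src: (batch, src_len) token ids -> fixed-length summary of the source
        _, h = self.rnn(self.embed(src))
        return h  # (1, batch, hidden_size)

class Decoder(nn.Module):
    def __init__(self, vocab_size, hidden_size):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.rnn = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, tgt, h):
        # tgt: (batch, tgt_len) shifted target tokens; h: encoder summary state
        o, _ = self.rnn(self.embed(tgt), h)
        return self.out(o)  # (batch, tgt_len, vocab_size) logits
```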

On the Properties of Neural Machine Translation: Encoder–Decoder Approaches

Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, Yoshua Bengio
2014 Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation  
In this paper, we focus on analyzing the properties of neural machine translation using two models: the RNN Encoder-Decoder and a newly proposed gated recursive convolutional neural network.  ...  Neural machine translation models often consist of an encoder and a decoder.  ...  Acknowledgments The authors would like to acknowledge the support of the following agencies for research funding and computing support: NSERC, Calcul Québec, Compute Canada, the Canada Research Chairs  ... 
doi:10.3115/v1/w14-4012 dblp:conf/ssst/ChoMBB14 fatcat:zogr4hmywfetnfv4fk3pwho6di

Translate and Label! An Encoder-Decoder Approach for Cross-lingual Semantic Role Labeling [article]

Angel Daza, Anette Frank
2019 arXiv   pre-print
We propose a Cross-lingual Encoder-Decoder model that simultaneously translates and generates sentences with Semantic Role Labeling annotations in a resource-poor target language.  ...  Finally, we measure the effectiveness of our method by using the generated data to augment the training basis for resource-poor languages and perform manual evaluation to show that it produces high-quality  ...  SAS2015-IDS-LWC and by the Ministry of Science, Research, and Art of Baden-Württemberg. We thank NVIDIA Corporation for donating GPUs used in this research.  ... 
arXiv:1908.11326v1 fatcat:uequw67mdnernipvnn7hd3xvv4

Translate and Label! An Encoder-Decoder Approach for Cross-lingual Semantic Role Labeling

Angel Daza, Anette Frank
2019 Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)  
We propose a Cross-lingual Encoder-Decoder model that simultaneously translates and generates sentences with Semantic Role Labeling annotations in a resource-poor target language.  ...  Finally, we measure the effectiveness of our method by using the generated data to augment the training basis for resource-poor languages and perform manual evaluation to show that it produces high-quality  ...  SAS2015-IDS-LWC and by the Ministry of Science, Research, and Art of Baden-Württemberg. We thank NVIDIA Corporation for donating GPUs used in this research.  ... 
doi:10.18653/v1/d19-1056 dblp:conf/emnlp/DazaF19 fatcat:c2w2hf2ahjayfd6ng4cwblxd4a
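
As a concrete picture of what "translating and generating sentences with Semantic Role Labeling annotations" can look like, here is a purely illustrative labeled output string; the bracket scheme and the German example are assumptions, not taken from the paper.

```python
# Purely illustrative "translate and label" output: the decoder emits
# target-language tokens interleaved with SRL bracket labels. The label
# inventory and the German example are assumptions, not the paper's format.
src = "The cat chased the mouse"
labeled_translation = "(ARG0 Die Katze ) (V jagte ) (ARG1 die Maus )"
print(src, "->", labeled_translation)
```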

An Encoder-Decoder Deep Learning Approach for Multistep Service Traffic Prediction

Theodoros Theodoropoulos
2021 Zenodo  
We propose an encoder-decoder DL approach for multi-step traffic prediction.  ...  The results show that the encoder-decoder architecture provides better accuracy in predicting the traffic and the duration of the sessions.  ...  Acknowledgment This work is part of the CHARITY project that has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 101016509.  ... 
doi:10.5281/zenodo.6963769 fatcat:pvhbbk2g6bdoplenyrh56fasqq
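
The entry describes multi-step traffic forecasting with an encoder-decoder. A hedged PyTorch sketch of that setup follows; the window length, horizon, and feature count are illustrative assumptions, not the paper's configuration.

```python
# Sketch of multi-step traffic forecasting with an encoder-decoder GRU.
import torch
import torch.nn as nn

class Seq2SeqForecaster(nn.Module):
    def __init__(self, n_features, hidden_size, horizon):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.GRU(n_features, hidden_size, batch_first=True)
        self.decoder = nn.GRUCell(n_features, hidden_size)
        self.proj = nn.Linear(hidden_size, n_features)

    def forward(self, past):
        # past: (batch, T_in, n_features) observed traffic window
        _, h = self.encoder(past)
        h = h.squeeze(0)
        step = past[:, -1, :]             # seed the decoder with the last observation
        preds = []
        for _ in range(self.horizon):     # roll the decoder forward step by step
            h = self.decoder(step, h)
            step = self.proj(h)
            preds.append(step)
        return torch.stack(preds, dim=1)  # (batch, horizon, n_features) forecast
```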

Data-Driven Approach to Encoding and Decoding 3-D Crystal Structures [article]

Jordan Hoffmann, Louis Maestrati, Yoshihide Sawada, Jian Tang, Jean Michel Sellier, Yoshua Bengio
2019 arXiv   pre-print
The first, an Encoder-Decoder pair, constructs a compressed latent space representation of each molecule and then decodes this description into an accurate reconstruction of the input.  ...  Two different neural networks were trained on a dataset of over 120,000 three-dimensional samples of single and repeating crystal structures, made by rotating the single unit cells.  ...  J.H. was supported by the Harvard Faculty of Arts and Sciences Quantitative Biology Initiative.  ... 
arXiv:1909.00949v1 fatcat:fs3zhzths5emjfs5rqt2lmen5y
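
The first network described is an encoder-decoder (autoencoder) pair that compresses each sample into a latent code and reconstructs it. A schematic PyTorch sketch of that latent-bottleneck idea follows; the paper operates on 3-D crystal representations, while this flattened MLP version, with assumed layer sizes, is only illustrative.

```python
# Schematic autoencoder: compress each sample into a latent code, then
# reconstruct the input. All sizes are assumptions for illustration.
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, input_dim, latent_dim):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),      # compressed latent representation
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, input_dim),       # reconstruction of the input
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z
```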

Can neural machine translation do simultaneous translation? [article]

Kyunghyun Cho, Masha Esipova
2016 arXiv   pre-print
We investigate the potential of attention-based neural machine translation in simultaneous translation.  ...  This paper presents a first step toward building a full simultaneous translation system based on neural machine translation.  ...  Attention-based neural machine translation is built as a composite of three modules: encoder, decoder, and attention mechanism.  ... 
arXiv:1606.02012v1 fatcat:2ozdrfjg7zelto4xzsi3ickxmm
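
The snippet names the three modules of attention-based NMT: encoder, decoder, and attention mechanism. Below is a sketch of Bahdanau-style additive attention, the variant this line of work builds on; the tensor names and shapes are illustrative assumptions.

```python
# Sketch of additive (Bahdanau-style) attention: score each encoder state
# against the current decoder state, then build a weighted context vector.
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    def __init__(self, hidden_size):
        super().__init__()
        self.W_enc = nn.Linear(hidden_size, hidden_size, bias=False)
        self.W_dec = nn.Linear(hidden_size, hidden_size, bias=False)
        self.v = nn.Linear(hidden_size, 1, bias=False)

    def forward(self, dec_state, enc_states):
        # dec_state: (batch, hidden); enc_states: (batch, src_len, hidden)
        scores = self.v(torch.tanh(
            self.W_enc(enc_states) + self.W_dec(dec_state).unsqueeze(1)
        )).squeeze(-1)                         # (batch, src_len) alignment scores
        weights = torch.softmax(scores, dim=-1)
        context = (weights.unsqueeze(-1) * enc_states).sum(dim=1)
        return context, weights                # context vector for the decoder
```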

Learning to Parse and Translate Improves Neural Machine Translation [article]

Akiko Eriguchi, Yoshimasa Tsuruoka, Kyunghyun Cho
2017 arXiv   pre-print
Our approach encourages the neural machine translation model to incorporate linguistic prior during training, and lets it translate on its own afterward.  ...  Much of the previous work was further constrained to considering linguistic prior on the source side.  ...  Acknowledgments We thank Yuchen Qiao and Kenjiro Taura for their help to speed up the implementations of training and also Kazuma Hashimoto for his valuable comments and discussions.  ... 
arXiv:1702.03525v2 fatcat:ufi4tfnnifcotmswxs3te5bcbq

Learning to Parse and Translate Improves Neural Machine Translation

Akiko Eriguchi, Yoshimasa Tsuruoka, Kyunghyun Cho
2017 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)  
Our approach encourages the neural machine translation model to incorporate linguistic prior during training, and lets it translate on its own afterward.  ...  Much of the previous work was further constrained to considering linguistic prior on the source side.  ...  Acknowledgments We thank Yuchen Qiao and Kenjiro Taura for their help to speed up the implementations of training and also Kazuma Hashimoto for his valuable comments and discussions.  ... 
doi:10.18653/v1/p17-2012 dblp:conf/acl/EriguchiTC17 fatcat:tpdxpuegsvcajav763j3s7pp6i
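
The training idea in both versions of this paper is multi-task learning: the translation model is trained with an auxiliary parsing objective so that linguistic prior enters during training, while inference uses translation alone. A minimal sketch of such a joint objective, with an assumed weighting factor, is:

```python
# Minimal sketch of the joint objective: translation loss plus a weighted
# auxiliary parsing loss over a shared encoder. `lam` is an assumption.
def joint_loss(translation_loss, parsing_loss, lam=0.1):
    # Linguistic prior enters only through the auxiliary objective during
    # training; at test time the model translates on its own.
    return translation_loss + lam * parsing_loss
```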

Zero-Resource Translation with Multi-Lingual Neural Machine Translation

Orhan Firat, Baskaran Sankaran, Yaser Al-Onaizan, Fatos T. Yarman Vural, Kyunghyun Cho
2016 Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing  
In this paper, we propose a novel finetuning algorithm for the recently introduced multi-way, multilingual neural machine translation that enables zero-resource machine translation.  ...  (1) as well as a single-pair neural translation model trained with up to 1M direct parallel sentences of the same language pair and (2) better than pivot-based translation strategy, while keeping only one  ...  This one-to-one strategy works on a source sentence given in one language by taking the encoder of that source language, the decoder of a target language and the shared attention mechanism.  ... 
doi:10.18653/v1/d16-1026 dblp:conf/emnlp/FiratSAYC16 fatcat:s5vme7popvcdbfeuwqdmvcikvu

Zero-Resource Translation with Multi-Lingual Neural Machine Translation [article]

Orhan Firat and Baskaran Sankaran and Yaser Al-Onaizan and Fatos T. Yarman Vural and Kyunghyun Cho
2016 arXiv   pre-print
In this paper, we propose a novel finetuning algorithm for the recently introduced multi-way, multilingual neural machine translation that enables zero-resource machine translation.  ...  (1) as well as a single-pair neural translation model trained with up to 1M direct parallel sentences of the same language pair and (2) better than pivot-based translation strategy, while keeping only  ...  This one-to-one strategy works on a source sentence given in one language by taking the encoder of that source language, the decoder of a target language and the shared attention mechanism.  ... 
arXiv:1606.04164v1 fatcat:bzyoswaisnazjja6oqfk2xvgza
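
The "one-to-one" strategy quoted in both records can be sketched as routing: pick the encoder of the source language and the decoder of the target language around a single shared attention mechanism. The dict-based registry and call signatures below are illustrative assumptions, not the authors' code.

```python
# Route a source sentence through its language-specific encoder and the
# target language's decoder, around one shared attention module.
def translate_one_to_one(sentence, src_lang, tgt_lang,
                         encoders, decoders, shared_attention):
    enc_states = encoders[src_lang](sentence)                 # per-language encoder
    return decoders[tgt_lang](enc_states, shared_attention)   # shared attention
```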

Predicting Early Warning Signs of Psychotic Relapse from Passive Sensing Data: An Approach Using Encoder-Decoder Neural Networks (Preprint)

Daniel A Adler, Dror Ben-Zeev, Vincent W-S Tseng, John M Kane, Rachel Brian, Andrew T Campbell, Marta Hauser, Emily A Scherer, Tanzeem Choudhury
2020 JMIR mHealth and uHealth  
We trained 2 types of encoder-decoder neural network models and a clustering-based local outlier factor model to predict behavioral anomalies that occurred within the 30-day period before a participant's  ...  The neural network model architecture and the percentage of relapse participant data used to train all models were varied.  ...  Acknowledgments The CrossCheck system development, deployment, and study was supported by the National Institute of Health's Exceptional Unconventional Research Enabling Knowledge Acceleration program,  ... 
doi:10.2196/19962 pmid:32865506 fatcat:qstd2qfoffdp7ntmmkntqx2deu
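
Encoder-decoder models detect behavioral anomalies via reconstruction error: windows of passive-sensing data that the trained model reconstructs poorly are flagged as candidate early warning signs. A hedged sketch of that scoring step follows; the shapes and threshold are assumptions for illustration.

```python
# Reconstruction-based anomaly scoring with a trained encoder-decoder.
import torch

def anomaly_scores(model, x):
    # x: (batch, seq_len, n_features) windows of passive-sensing features
    with torch.no_grad():
        recon = model(x)
    return ((recon - x) ** 2).mean(dim=(1, 2))  # per-window reconstruction error

def flag_anomalies(scores, threshold):
    return scores > threshold  # boolean mask of candidate relapse warnings
```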

Neural Machine Translation Advised by Statistical Machine Translation [article]

Xing Wang, Zhengdong Lu, Zhaopeng Tu, Hang Li, Deyi Xiong, Min Zhang
2016 arXiv   pre-print
Neural Machine Translation (NMT) is a new approach to machine translation that has made great progress in recent years.  ...  More specifically, at each decoding step, SMT offers additional recommendations of generated words based on the decoding information from NMT (e.g., the generated partial translation and attention history  ...  Acknowledgments This work was supported by the National Natural Science Foundation of China (Grants No.61525205, 61432013 and 61622209).  ... 
arXiv:1610.05150v2 fatcat:cfrxwnrgmjcx5g6nxewe3pj7fq
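
One simple way to realize "SMT offers additional recommendations of generated words" at each decoding step is to interpolate the two word distributions under a gate. The formulation below is an assumption for illustration, not necessarily the paper's exact mechanism.

```python
# Gated interpolation of NMT and SMT word distributions at one decoding step.
def combined_word_probs(p_nmt, p_smt, gate):
    # p_nmt, p_smt: (batch, vocab) probabilities; gate in [0, 1] trades off
    # the NMT prediction against the SMT recommendation.
    return (1.0 - gate) * p_nmt + gate * p_smt
```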

Neural Machine Translation [article]

Philipp Koehn
2017 arXiv   pre-print
Draft of a textbook chapter on neural machine translation: a comprehensive treatment of the topic, ranging from an introduction to neural networks and computation graphs to a description of the currently dominant  ...  Written as a chapter for the textbook Statistical Machine Translation. Used in the JHU Fall 2017 class on machine translation.  ...  Encoder-Decoder Approach: Our first stab at a neural translation model is a straightforward extension of the language model.  ... 
arXiv:1709.07809v1 fatcat:kj23sup7yfaxvllfha4v7xbugq

Non-Autoregressive Neural Machine Translation [article]

Jiatao Gu, James Bradbury, Caiming Xiong, Victor O.K. Li, Richard Socher
2018 arXiv   pre-print
Existing approaches to neural machine translation condition each output word on previously generated outputs.  ...  We demonstrate substantial cumulative improvements associated with each of the three aspects of our training strategy, and validate our approach on IWSLT 2016 English-German and two WMT language pairs.  ...  on top of the output of the last encoder layer.  ... 
arXiv:1711.02281v2 fatcat:ukzvkpbe2zfunm5c2ffvf7s7om
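
The contrast this abstract draws is between autoregressive decoding, where each output word conditions on previously generated outputs, and non-autoregressive decoding, where all positions are predicted in parallel. The sketch below assumes hypothetical `model` signatures.

```python
# Autoregressive vs. non-autoregressive decoding (model signatures hypothetical).
import torch

def autoregressive_decode(model, src, max_len, bos_id):
    # O(max_len) sequential steps: each token sees the prefix before it.
    ys = torch.full((src.size(0), 1), bos_id, dtype=torch.long, device=src.device)
    for _ in range(max_len):
        next_tok = model(src, ys)[:, -1].argmax(-1, keepdim=True)
        ys = torch.cat([ys, next_tok], dim=1)
    return ys[:, 1:]

def non_autoregressive_decode(model, src, tgt_len):
    # One parallel pass over all target positions.
    return model(src, tgt_len).argmax(-1)
```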