7,750 Hits in 5.5 sec

Iterative Dual Domain Adaptation for Neural Machine Translation [article]

Jiali Zeng, Yang Liu, Jinsong Su, Yubin Ge, Yaojie Lu, Yongjing Yin, Jiebo Luo
2019 arXiv   pre-print
To this end, we propose an iterative dual domain adaptation framework for NMT.  ...  Previous studies on domain adaptation for neural machine translation (NMT) mainly focus on one-pass transfer of out-of-domain translation knowledge to an in-domain NMT model.  ...  We also thank the reviewers for their insightful comments  ...
arXiv:1912.07239v1 fatcat:slkauogrcvdijbtcy5abzzvwjq

Iterative Dual Domain Adaptation for Neural Machine Translation

Jiali Zeng, Yang Liu, Jinsong Su, Yubin Ge, Yaojie Lu, Yongjing Yin, Jiebo Luo
2019 Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)  
To this end, we propose an iterative dual domain adaptation framework for NMT.  ...  Previous studies on domain adaptation for neural machine translation (NMT) mainly focus on one-pass transfer of out-of-domain translation knowledge to an in-domain NMT model.  ...  We also thank the reviewers for their insightful comments  ...
doi:10.18653/v1/d19-1078 dblp:conf/emnlp/ZengLSGLYL19 fatcat:5vejlipwazb75aqds6nagkqw6y

Unsupervised Neural Machine Translation for Low-Resource Domains via Meta-Learning [article]

Cheonbok Park, Yunwon Tae, Taehee Kim, Soyoung Yang, Mohammad Azam Khan, Eunjeong Park, Jaegul Choo
2021 arXiv   pre-print
To address this issue, this paper presents a novel meta-learning algorithm for unsupervised neural machine translation (UNMT) that trains the model to adapt to another domain by utilizing only a small  ...  Unsupervised machine translation, which utilizes unpaired monolingual corpora as training data, has achieved performance comparable to supervised machine translation.  ...  An empirical comparison of domain adaptation methods for neural machine translation.  ...
arXiv:2010.09046v2 fatcat:2iquah5dufgwfdnzl2kvq7gkl4

A Study on the Intelligent Translation Model for English Incorporating Neural Network Migration Learning

Yanbo Zhang, Xin Ning
2021 Wireless Communications and Mobile Computing  
problem of neural networks during training and to improve the generalization ability of end-to-end neural machine translation models under low-resource conditions.  ...  on a supervised algorithm; then, for machine translation tasks in resource-poor languages with little parallel corpus data, migration learning techniques are used to prevent overfitting  ...  However, this domain-adaptive migration learning approach tends to cause overfitting when training neural machine translation models and makes convergence difficult during training.  ...
doi:10.1155/2021/1244389 fatcat:zk7u6zm5dvcvraxtqa3bpxh4wy

Explaining and Generalizing Back-Translation through Wake-Sleep [article]

Ryan Cotterell, Julia Kreutzer
2018 arXiv   pre-print
Back-translation has become a commonly employed heuristic for semi-supervised neural machine translation. The technique is straightforward to apply and has led to state-of-the-art results.  ...  to a single iteration of the wake-sleep algorithm in our proposed model.  ...  For TED, each side of the training data (153k sentences) serves as monotext for semi-supervised domain adaptation via back-translation. Machine Translation Model.  ...
arXiv:1806.04402v1 fatcat:3b466l3lrzdfdgshowstcmt3ma
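The back-translation recipe this entry describes can be sketched minimally. The lookup-table "model" and the German/English toy vocabulary below are hypothetical stand-ins for a real target-to-source NMT system:

```python
# Minimal back-translation sketch: a reverse (target-to-source) model
# translates in-domain target monotext into synthetic source sentences;
# the forward model would then train on (synthetic source, real target)
# pairs. The lookup table below is a toy stand-in for a real NMT model.

def translate(model, sentence):
    """Word-by-word lookup translation; unknown words pass through."""
    return " ".join(model.get(word, word) for word in sentence.split())

def back_translate(reverse_model, target_monotext):
    """Pair each monolingual target sentence with a synthetic source."""
    return [(translate(reverse_model, t), t) for t in target_monotext]

# Toy German-to-English reverse model (hypothetical vocabulary).
reverse_model = {"hallo": "hello", "welt": "world"}
target_monotext = ["hallo welt", "hallo"]

synthetic_pairs = back_translate(reverse_model, target_monotext)
print(synthetic_pairs)  # [('hello world', 'hallo welt'), ('hello', 'hallo')]
```

In a real pipeline the lookup would be a beam-search decode of a trained reverse model, and the synthetic pairs would be mixed with genuine parallel data for forward training.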

Self-Learning for Zero Shot Neural Machine Translation [article]

Surafel M. Lakew, Matteo Negri, Marco Turchi
2021 arXiv   pre-print
Neural Machine Translation (NMT) approaches employing monolingual data are showing steady improvements in resource rich conditions.  ...  Compared to unsupervised NMT, consistent improvements are observed even in a domain-mismatch setting, attesting to the usability of our method.  ...  "Adapting Multilingual Neural Machine Translation to Unseen Languages". In 16th International Workshop on Spoken Language Translation (IWSLT), Hong Kong.  ...
arXiv:2103.05951v1 fatcat:ccpndv5rwbhj5e5jli2omwb7y4

Bandit Structured Prediction for Neural Sequence-to-Sequence Learning [article]

Julia Kreutzer, Artem Sokolov, Stefan Riezler
2018 arXiv   pre-print
We present an evaluation on a neural machine translation task that shows improvements of up to 5.89 BLEU points for domain adaptation from simulated bandit feedback.  ...  We advance this framework by lifting linear bandit learning to neural sequence-to-sequence learning problems using attention-based recurrent neural networks.  ...  Neural Machine Translation Neural models for machine translation are based on a sequence-to-sequence learning architecture consisting of an encoder and a decoder (Bahdanau et al., 2015).  ...
arXiv:1704.06497v2 fatcat:whu77suxtnf6vggpkuzdyhs7sq
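The bandit learning setup in this entry — the system samples one output, receives scalar feedback for it alone, and updates toward higher expected reward — can be illustrated with a REINFORCE-style sketch. The linear scorer over three candidate outputs and the simulated feedback vector are hypothetical stand-ins for a neural seq2seq model and per-output quality judgments:

```python
import numpy as np

# Bandit structured prediction sketch: sample one output, observe scalar
# feedback for that output only (no gold reference), and take a
# REINFORCE-style step on the log-probability of the sampled output.

rng = np.random.default_rng(0)
theta = np.zeros(3)                        # scores for 3 candidate outputs

def probs(theta):
    """Softmax over candidate scores."""
    e = np.exp(theta - theta.max())
    return e / e.sum()

feedback = np.array([0.2, 0.9, 0.1])       # simulated per-output quality
lr = 0.1
for _ in range(2000):
    p = probs(theta)
    y = rng.choice(3, p=p)                 # sample an output (exploration)
    r = feedback[y]                        # bandit feedback for y only
    grad_logp = -p
    grad_logp[y] += 1.0                    # gradient of log p(y) w.r.t. theta
    theta += lr * r * grad_logp            # stochastic expected-reward ascent

print(int(probs(theta).argmax()))          # index 1, the highest-feedback output
```

The same estimator scales to sequences by replacing the candidate index with a sampled translation and `grad_logp` with the gradient of its sequence log-probability.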

Bandit Structured Prediction for Neural Sequence-to-Sequence Learning

Julia Kreutzer, Artem Sokolov, Stefan Riezler
2017 Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)  
We present an evaluation on a neural machine translation task that shows improvements of up to 5.89 BLEU points for domain adaptation from simulated bandit feedback.  ...  We advance this framework by lifting linear bandit learning to neural sequence-to-sequence learning problems using attention-based recurrent neural networks.  ...  Neural Machine Translation Neural models for machine translation are based on a sequence-to-sequence learning architecture consisting of an encoder and a decoder (Bahdanau et al., 2015).  ...
doi:10.18653/v1/p17-1138 dblp:conf/acl/KreutzerSR17 fatcat:7xuqsxkicjcd7kmb7iduyf2f3e

CUED@WMT19:EWC&LMs

Felix Stahlberg, Danielle Saunders, Adrià de Gispert, Bill Byrne
2019 Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)  
Fine-tuning is often used to adapt a model to a new domain (Luong and Manning, 2015), while ensembling neural machine translation (NMT) with neural language models (LMs) is an effective way to leverage  ...  First, we implemented dual cross-entropy filtering (Junczys-Dowmunt, 2018a,b), a sophisticated data selection criterion based on neural language model and neural machine translation model scores in both  ...
doi:10.18653/v1/w19-5340 dblp:conf/wmt/StahlbergSGB19 fatcat:6djejanvgrhzvi2rm62brnoami
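The dual cross-entropy filtering criterion this entry mentions (Junczys-Dowmunt, 2018) scores each sentence pair with the per-word cross-entropies of a forward and a backward translation model, preferring pairs where both are low and agree. A minimal sketch, in which the entropy values are made-up numbers standing in for real model scores:

```python
# Dual conditional cross-entropy filtering, sketched: each sentence pair
# gets a forward (src->tgt) and backward (tgt->src) per-word model
# cross-entropy; pairs whose two entropies are both low AND close to
# each other score best. The numbers below are toy stand-ins for real
# model scores.

def dual_xent_score(h_fwd, h_bwd):
    """Lower is better: penalize high entropy and model disagreement."""
    return abs(h_fwd - h_bwd) + 0.5 * (h_fwd + h_bwd)

pairs = {
    "clean pair":      (2.1, 2.0),   # low, agreeing entropies -> keep
    "noisy pair":      (7.5, 7.8),   # both models find it improbable
    "misaligned pair": (2.2, 9.0),   # models disagree strongly -> drop
}

ranked = sorted(pairs, key=lambda name: dual_xent_score(*pairs[name]))
print(ranked[0])  # 'clean pair' ranks first
```

In practice the corpus is sorted by this score and only the top fraction is kept for training.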

Adaptive Language Processing Based on Deep Learning in Cloud Computing Platform

Wenbin Xu, Chengbo Yin
2020 Complexity  
These data are undoubtedly a great asset for statistical machine translation research.  ...  Finally, the adaptive method of massive corpus filtering and statistical machine translation based on a cloud platform is verified.  ...  translation system has also achieved good results. The traditional statistical machine translation domain-adaptive method usually migrates the model for a single domain.  ...
doi:10.1155/2020/5828130 fatcat:ygls6pzwc5emrmupt6ml2tppau

Tilde's Machine Translation Systems for WMT 2019

Marcis Pinnis, Rihards Krišlauks, Matīss Rikters
2019 Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)  
The paper describes the development process of Tilde's NMT systems for the WMT 2019 shared task on news translation.  ...  We also present a new method to ensure source-domain adherence in back-translated data. Our systems achieved a shared first place in human evaluation.  ...  We thank the High Performance Computing Center of Riga Technical University for providing access to their GPU computing infrastructure.  ...
doi:10.18653/v1/w19-5335 dblp:conf/wmt/PinnisKR19 fatcat:je4s52poizdlvnmbjabiclezwm

Johns Hopkins University Submission for WMT News Translation Task

Kelly Marchisio, Yash Kumar Lal, Philipp Koehn
2019 Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)  
The systems combine multiple techniques (sampling, filtering, iterative back-translation, and continued training) previously used to improve the performance of neural machine translation models.  ...  We describe the work of Johns Hopkins University for the shared task of news translation organized by the Fourth Conference on Machine Translation (2019).  ...  We also thank our anonymous reviewers for their helpful comments.  ...
doi:10.18653/v1/w19-5329 dblp:conf/wmt/MarchisioLK19 fatcat:2emb7a3esbbwznu4stasggq7iu

CtlGAN: Few-shot Artistic Portraits Generation with Contrastive Transfer Learning [article]

Yue Wang, Ran Yi, Ying Tai, Chengjie Wang, Lizhuang Ma
2022 arXiv   pre-print
We adapt a pretrained StyleGAN in the source domain to a target artistic domain with no more than 10 artistic faces.  ...  We propose a new encoder which embeds real faces into Z+ space and a dual-path training strategy to better cope with the adapted decoder and eliminate artifacts.  ...  We train 5000 iterations for the Sketches domain, 3000 iterations for the Raphael and Caricature domains, 2000 iterations for the Sunglasses domain, 1250 iterations for the Roy Lichtenstein domain, and 1000 iterations  ...
arXiv:2203.08612v1 fatcat:eqrot5yew5hd5iftuqofdqftpe

Iterative Domain-Repaired Back-Translation [article]

Hao-Ran Wei, Zhirui Zhang, Boxing Chen, Weihua Luo
2020 arXiv   pre-print
To address this issue, we propose a novel iterative domain-repaired back-translation framework, which introduces the Domain-Repair (DR) model to refine translations in synthetic bilingual data.  ...  One common and effective strategy for this case is exploiting in-domain monolingual data with the back-translation method.  ...  Acknowledgments We would like to thank the anonymous reviewers for the helpful comments. This work is supported by National Key R&D Program of China (2018YFB1403202).  ... 
arXiv:2010.02473v1 fatcat:yew2m44xd5eexkl62djqe46hve
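The domain-repair loop this entry describes can be sketched minimally: back-translate in-domain target monotext, then let a Domain-Repair (DR) model post-edit the synthetic side before forward training. The lookup-table "models" and the medical-domain example are hypothetical stand-ins for real NMT and DR systems:

```python
# Iterative domain-repaired back-translation, sketched: a reverse model
# back-translates in-domain target monotext, then a Domain-Repair (DR)
# model post-edits the synthetic side before the forward model trains on
# the (repaired synthetic source, real target) pairs. All lookup tables
# are toy stand-ins for real models (hypothetical data).

def apply_model(model, sentence):
    """Word-by-word lookup; unknown words pass through."""
    return " ".join(model.get(word, word) for word in sentence.split())

# General-domain reverse (German-to-English) model: picks a generic sense.
reverse_model = {"das": "the", "virus": "bug", "wurde": "was",
                 "entdeckt": "discovered"}
# DR model: repairs generic word choices toward the medical domain.
dr_model = {"bug": "virus"}

target_monotext = ["das virus wurde entdeckt"]
synthetic = [apply_model(reverse_model, t) for t in target_monotext]
repaired = [apply_model(dr_model, s) for s in synthetic]

print(repaired)  # ['the virus was discovered']
```

In the full framework this repair-and-retrain cycle is iterated, with the improved forward model producing better synthetic data for the next round.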

Stable Distribution Alignment Using the Dual of the Adversarial Distance [article]

Ben Usman, Kate Saenko, Brian Kulis
2018 arXiv   pre-print
We test our hypothesis on the problem of aligning two synthetic point clouds on a plane and on a real-image domain adaptation problem on digits.  ...  In both cases, the dual formulation yields an iterative procedure that gives more stable and monotonic improvement over time.  ...  Related ideas have been proposed for unsupervised domain adaptation.  ... 
arXiv:1707.04046v4 fatcat:5qgicm6oqvbflntxx5kvb7pj4u