88,278 Hits in 5.6 sec

Pre-training Multilingual Neural Machine Translation by Leveraging Alignment Information [article]

Zehui Lin, Xiao Pan, Mingxuan Wang, Xipeng Qiu, Jiangtao Feng, Hao Zhou, Lei Li
2021 arXiv   pre-print
We carry out extensive experiments on 42 translation directions across diverse settings, including low-, medium-, and rich-resource scenarios, as well as transfer to exotic language pairs.  ...  We propose mRASP, an approach to pre-train a universal multilingual neural machine translation model.  ...  We would also like to thank Liwei Wu, Huadong Chen, Qianqian Dong, Zewei Sun, and Weiying Ma for their useful suggestions and help with the experiments.  ...
arXiv:2010.03142v3 fatcat:v5zcixzrurecnnmlaewkmxzg6u
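The alignment-based pre-training named in the title above hinges on substituting source words with aligned translations so that related words across languages land near each other in representation space. Below is a minimal sketch of that kind of aligned code-switching substitution, assuming a toy bilingual dictionary; the dictionary, function name, and substitution rate are illustrative, not the authors' actual resources.

```python
import random

def aligned_substitution(tokens, bilingual_dict, rate=0.3, seed=None):
    """Randomly replace source tokens with dictionary translations.

    tokens: list of source-language tokens
    bilingual_dict: maps a source token to a list of candidate translations
    rate: probability of substituting each token that has an entry
    """
    rng = random.Random(seed)
    out = []
    for tok in tokens:
        candidates = bilingual_dict.get(tok)
        if candidates and rng.random() < rate:
            out.append(rng.choice(candidates))
        else:
            out.append(tok)
    return out

# Toy example: code-switch an English sentence toward French before pre-training.
toy_dict = {"cat": ["chat"], "sat": ["s'assit"], "mat": ["tapis"]}
print(aligned_substitution("the cat sat on the mat".split(), toy_dict, rate=0.5, seed=0))
```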

Universal Conditional Masked Language Pre-training for Neural Machine Translation [article]

Pengfei Li, Liangyou Li, Meng Zhang, Minghao Wu, Qun Liu
2022 arXiv   pre-print
Pre-trained sequence-to-sequence models have significantly improved Neural Machine Translation (NMT).  ...  Different from prior works, where pre-trained models usually adopt a unidirectional decoder, this paper demonstrates that pre-training a sequence-to-sequence model but with a bidirectional decoder can  ...  Acknowledgments We would like to thank the anonymous reviewers for their helpful feedback. We also thank Wenyong Huang, Lu Hou, Yinpeng Guo, and Guchun Zhang for their useful suggestions and help with the experiments  ...
arXiv:2203.09210v2 fatcat:lae2gxxv5zh7zbru4kt76ugsqi
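The bidirectional-decoder pre-training mentioned in the entry above is in the spirit of a conditional masked language model: some target tokens are masked and predicted given the full source and the remaining target context on both sides. A minimal data-preparation sketch under that assumption; the mask token, ratio, and function name are placeholders rather than the authors' implementation.

```python
import random

MASK = "<mask>"

def make_cmlm_example(source_tokens, target_tokens, mask_ratio=0.15, seed=None):
    """Mask a random subset of target tokens; the model would predict them
    conditioned on the source and the unmasked (bidirectional) target context."""
    rng = random.Random(seed)
    n_mask = max(1, int(len(target_tokens) * mask_ratio))
    positions = set(rng.sample(range(len(target_tokens)), n_mask))
    masked = [MASK if i in positions else t for i, t in enumerate(target_tokens)]
    labels = {i: target_tokens[i] for i in positions}
    return source_tokens, masked, labels

src = "das ist ein Test".split()
tgt = "this is a test".split()
print(make_cmlm_example(src, tgt, seed=1))
```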

XLM-T: Scaling up Multilingual Machine Translation with Pretrained Cross-lingual Transformer Encoders [article]

Shuming Ma, Jian Yang, Haoyang Huang, Zewen Chi, Li Dong, Dongdong Zhang, Hany Hassan Awadalla, Alexandre Muzio, Akiko Eriguchi, Saksham Singhal, Xia Song, Arul Menezes (+1 others)
2020 arXiv   pre-print
Multilingual machine translation enables a single model to translate between different languages.  ...  In this work, inspired by the recent success of language model pre-training, we present XLM-T, which initializes the model with an off-the-shelf pretrained cross-lingual Transformer encoder and fine-tunes  ...  We further combine the parallel ... Table 3: X → En and En → X test BLEU for high/medium/low-resource language pairs in the many-to-many setting on OPUS-100 test sets.  ...
arXiv:2012.15547v1 fatcat:j52fkqqadzdlbohlkenwqndimu
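One common way to realize the "initialize with a pretrained cross-lingual encoder, then fine-tune" recipe described above is to warm-start an encoder-decoder model from an existing checkpoint. The sketch below uses the Hugging Face transformers library rather than the authors' code; the checkpoint name and the choice to warm-start the decoder from the same checkpoint are illustrative assumptions.

```python
from transformers import EncoderDecoderModel, AutoTokenizer

# Warm-start a seq2seq model: the encoder (and, here, also the decoder) is
# initialized from a pretrained cross-lingual Transformer, then the whole
# model is fine-tuned on parallel data.
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "xlm-roberta-base", "xlm-roberta-base"
)
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id

# One toy training step on a single sentence pair.
batch = tokenizer(["Hello world"], return_tensors="pt")
labels = tokenizer(["Hallo Welt"], return_tensors="pt").input_ids
loss = model(input_ids=batch.input_ids,
             attention_mask=batch.attention_mask,
             labels=labels).loss
loss.backward()
```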

Revisiting Modularized Multilingual NMT to Meet Industrial Demands [article]

Sungwon Lyu, Bokyung Son, Kichang Yang, Jaekyoung Bae
2020 arXiv   pre-print
In this study, we revisit the multilingual neural machine translation model that only shares modules among the same languages (M2) as a practical alternative to 1-1 to satisfy industrial requirements.  ...  The zero-shot performance of the added modules is even comparable to supervised models. Our findings suggest that M2 can be a competent candidate for multilingual translation in industry.  ...  Zero-resource translation with multi-lingual neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 268-277.  ...
arXiv:2010.09402v1 fatcat:diar2fvmsreidevpasfy6ewlqy
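The modularized (M2) layout described above keeps a separate encoder and decoder per language and composes them per translation direction, so adding a language only adds modules. A minimal structural sketch of that composition; the class and module names are placeholders, not the authors' architecture code.

```python
class ModularMNMT:
    """Keep per-language encoder/decoder modules and plug them together
    per translation direction (the M2 layout described above)."""

    def __init__(self, encoders, decoders):
        self.encoders = encoders  # e.g. {"en": en_encoder, "de": de_encoder}
        self.decoders = decoders  # e.g. {"en": en_decoder, "fr": fr_decoder}

    def translate(self, sentence, src_lang, tgt_lang):
        hidden = self.encoders[src_lang](sentence)   # shared intermediate space
        return self.decoders[tgt_lang](hidden)

    def add_language(self, lang, encoder=None, decoder=None):
        # New languages only add modules; existing modules stay untouched,
        # which is what makes zero-shot reuse of old modules possible.
        if encoder is not None:
            self.encoders[lang] = encoder
        if decoder is not None:
            self.decoders[lang] = decoder
```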

Examining Scaling and Transfer of Language Model Architectures for Machine Translation [article]

Biao Zhang, Behrooz Ghorbani, Ankur Bapna, Yong Cheng, Xavier Garcia, Jonathan Shen, Orhan Firat
2022 arXiv   pre-print
In machine translation, EncDec has long been the favoured approach, but few studies have investigated the performance of LMs.  ...  Our results show that: (i) different LMs have different scaling properties, where architectural differences often have a significant impact on model performance at small scales, but the performance gap  ...  tasks (Brown et al., 2020; Raffel et al., ...). However, in neural machine translation (NMT), EncDec has been the dominant paradigm across all translation tasks (e.g., high/low-resource, multilingual, and  ...
arXiv:2202.00528v3 fatcat:jlzm5kxssvamzmh2a3g43oknya

Building a New Sentiment Analysis Dataset for Uzbek Language and Creating Baseline Models

Elmurod Kuriyozov, Sanatbek Matlatipov
2019 Proceedings (MDPI)  
Our methodology involves collecting a medium-sized manually annotated dataset and a larger dataset automatically translated from existing resources.  ...  Then, we use these datasets to train sentiment analysis models for the Uzbek language, using both traditional machine learning techniques and recent deep learning models.  ...  The founding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.  ...
doi:10.3390/proceedings2019021037 fatcat:f3qrtcsjqffx5fv7lybmnaow7u
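For the "traditional machine learning techniques" mentioned above, a typical starting point is a bag-of-words classifier. A minimal sketch with scikit-learn on placeholder Uzbek snippets; this is an illustration of such a baseline, not the authors' exact setup.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder examples; the real work trains on the annotated Uzbek dataset.
texts = ["juda yaxshi mahsulot", "yomon xizmat", "zo'r tajriba", "umuman yoqmadi"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

baseline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(max_iter=1000),
)
baseline.fit(texts, labels)
print(baseline.predict(["yaxshi xizmat"]))
```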

Multilingual Denoising Pre-training for Neural Machine Translation

Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer
2020 Transactions of the Association for Computational Linguistics  
Pre-training a complete model allows it to be directly fine-tuned for supervised (both sentence-level and document-level) and unsupervised machine translation, with no task-specific modifications.  ...  We demonstrate that adding mBART initialization produces performance gains in all but the highest-resource settings, including up to 12 BLEU points for low-resource MT and over 5 BLEU points for many document-level  ...  Acknowledgments We thank Marc'Aurelio Ranzato, Guillaume Lample, Alexis Conneau, and Michael Auli for sharing their expertise on low-resource and unsupervised machine translation, and Peng-Jen Chen and  ...
doi:10.1162/tacl_a_00343 fatcat:ktmh3drlhzgf3k3oo45e6ztbie
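The "pre-train once, fine-tune directly" workflow above is easiest to see with the released mBART checkpoint. A minimal fine-tuning sketch using the Hugging Face transformers port (not the authors' fairseq code); the language codes and the English-Romanian sentence pair are illustrative inputs.

```python
from transformers import MBartForConditionalGeneration, MBartTokenizer

# Load the multilingual denoising pre-trained model and fine-tune it directly
# on parallel sentences, with no task-specific modifications.
tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-cc25",
                                           src_lang="en_XX", tgt_lang="ro_RO")
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-cc25")

inputs = tokenizer("UN chief says there is no military solution in Syria",
                   text_target="Şeful ONU declară că nu există o soluţie militară în Siria",
                   return_tensors="pt")
loss = model(**inputs).loss   # backprop this inside a normal training loop
loss.backward()
```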

Machine Translation Pre-training for Data-to-Text Generation – A Case Study in Czech [article]

Mihir Kale, Scott Roy
2020 arXiv   pre-print
Based on our experiments on Czech, a morphologically complex language, we find that pre-training lets us train end-to-end models with significantly improved performance, as judged by automatic metrics  ...  In this paper, we study the effectiveness of machine translation based pre-training for data-to-text generation in non-English languages.  ...  Acknowledgments We would like to thank Markus Freitag for insightful discussions and Ondřej Dušek for providing the tgen-sota model outputs.  ... 
arXiv:2004.02077v1 fatcat:t5crlxws5vayllrge3vb2gseu4

Cross-lingual Supervision Improves Unsupervised Neural Machine Translation [article]

Mingxuan Wang, Hongxiao Bai, Hai Zhao, Lei Li
2021 arXiv   pre-print
Neural machine translation (NMT) is ineffective for zero-resource languages.  ...  Recent work exploring the possibility of unsupervised neural machine translation (UNMT) with only monolingual data has achieved promising results.  ...  For CUNMT, we also list results for different experimental settings.  ...
arXiv:2004.03137v3 fatcat:ljolh2ya55a7dm5zfwdkdxqcmi

Producing Unseen Morphological Variants in Statistical Machine Translation

Matthias Huck, Aleš Tamchyna, Ondřej Bojar, Alexander Fraser
2017 Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers  
Our approach is novel in that it is integrated into decoding and takes advantage of context information from both the source-language and the target-language sides.  ...  Different from most previous work, we do not separate morphological prediction from lexical choice into two consecutive steps.  ...  of Morphosyntax for Statistical Machine Translation (Phase Two).  ...
doi:10.18653/v1/e17-2059 dblp:conf/eacl/FraserHBT17 fatcat:yk5y3qppsbft7depbvjbur43sm

On Systematic Style Differences between Unsupervised and Supervised MT and an Application for High-Resource Machine Translation [article]

Kelly Marchisio, Markus Freitag, David Grangier
2022 arXiv   pre-print
Modern unsupervised machine translation (MT) systems reach reasonable translation quality under clean and controlled data conditions.  ...  We compare translations from supervised and unsupervised MT systems of similar quality, finding that unsupervised output is more fluent and more structurally different in comparison to human translation  ...  , even in high-resource settings.  ... 
arXiv:2106.15818v2 fatcat:2r76w6wwq5e4rdguubtmenzone

Cyclic Scheduling of Flexible Job-shop with Time Window Constraints and Resource Capacity Constraints

Hongchang Zhang, Simon Collart-Dutilleul, Khaled Mesghouni
2015 IFAC-PapersOnLine  
For time window constraints, the duration for processing products on machines and the duration for translating products from one machine to another by transfers are taken into account; for the resource  ...  CASE STUDY: Numerical experiments. In this section, to carry out the numerical experiments, we use the example presented in Figure 1, the MIP model in Chapter 3, and the data in Tables 2 and 3.  ...
doi:10.1016/j.ifacol.2015.06.184 fatcat:jecpbxntfna3lnivtmnlu4pzu4
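To make the time-window and transfer-duration constraints above concrete, here is a toy mixed-integer program sketch using the PuLP library; the machines, durations, and deadline are invented for illustration and are unrelated to the paper's actual instance or MIP model.

```python
from pulp import LpProblem, LpMinimize, LpVariable

# Two operations of one product: processed on M1, transferred, then processed on M2.
proc = {"M1": 4, "M2": 3}   # processing durations
transfer = 2                # transfer duration between machines
deadline = 15               # time-window upper bound on completion

prob = LpProblem("toy_jobshop", LpMinimize)
start_m1 = LpVariable("start_M1", lowBound=0)
start_m2 = LpVariable("start_M2", lowBound=0)
makespan = LpVariable("makespan", lowBound=0)

# Precedence: the second operation cannot start before processing on M1
# plus the transfer to M2 has finished.
prob += start_m2 >= start_m1 + proc["M1"] + transfer
# Completion time and time-window (deadline) constraint.
prob += makespan >= start_m2 + proc["M2"]
prob += makespan <= deadline
prob += makespan   # objective: minimize completion time

prob.solve()
print(start_m1.value(), start_m2.value(), makespan.value())
```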

Improving Neural Machine Translation with Pre-trained Representation [article]

Rongxiang Weng, Heng Yu, Shujian Huang, Weihua Luo, Jiajun Chen
2019 arXiv   pre-print
Experimental results on Chinese-English and German-English machine translation tasks show that our proposed model achieves improvements over strong Transformer baselines, while experiments on English-Turkish  ...  Monolingual data has been demonstrated to be helpful in improving the translation quality of neural machine translation (NMT).  ...  The different sizes of the back-translated corpora are shown in Table 3. We translate 0.2M (small), 0.4M (medium), and 1M (large) sentences as pseudo data sets.  ...
arXiv:1908.07688v1 fatcat:yytvgscqdjfajan5fq3fqgf23q
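The pseudo data sets mentioned above come from back-translation: a reverse-direction model translates monolingual target-side text, and its outputs are paired with the originals as synthetic parallel data. A minimal sketch assuming any callable translation model; `reverse_model` is a placeholder, not the authors' system.

```python
def back_translate(monolingual_target, reverse_model, batch_size=32):
    """Create pseudo parallel data: (synthetic source, real target) pairs.

    monolingual_target: list of sentences in the target language
    reverse_model: callable mapping a list of target sentences to source sentences
    """
    pseudo_pairs = []
    for i in range(0, len(monolingual_target), batch_size):
        batch = monolingual_target[i:i + batch_size]
        synthetic_sources = reverse_model(batch)
        pseudo_pairs.extend(zip(synthetic_sources, batch))
    return pseudo_pairs

# Usage sketch: sample 0.2M / 0.4M / 1M target sentences to build the small,
# medium, and large pseudo data sets mentioned above, then mix them with the
# genuine parallel corpus for training.
```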

Pivot Through English: Reliably Answering Multilingual Questions without Document Retrieval [article]

Ivan Montero, Shayne Longpre, Ni Lao, Andrew J. Frank, Christopher DuBois
2021 arXiv   pre-print
Analysis demonstrates the particular efficacy of this strategy over state-of-the-art alternatives in challenging settings: low-resource languages, with extensive distractor data and query distribution  ...  Existing methods for open-retrieval question answering in lower-resource languages (LRLs) lag significantly behind English.  ...  In Figure 3 we plot the accuracy of the best-performing models from Tables 2 and 3 on each of the high-, medium-, and low-resource language groups over different database sizes on MKQA.  ...
arXiv:2012.14094v2 fatcat:bbvne32bxvhixowjyq5fewpxre
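The title's pivot idea, at its simplest: translate the non-English question into English and answer it with a single English answering component, skipping per-language document retrieval. The translation and answering callables below are placeholders for illustration, not the authors' models.

```python
def answer_via_english_pivot(question, translate_to_en, english_answerer):
    """Pivot strategy: map a question in any language to English, then
    reuse one English answering component.

    translate_to_en: callable, question -> English question
    english_answerer: callable, English question -> answer string
    """
    english_question = translate_to_en(question)
    return english_answerer(english_question)

# Usage sketch with trivial stand-ins:
answer = answer_via_english_pivot(
    "¿Cuál es la capital de Francia?",
    translate_to_en=lambda q: "What is the capital of France?",
    english_answerer=lambda q: "Paris",
)
print(answer)
```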

An Analysis of Simple Data Augmentation for Named Entity Recognition [article]

Xiang Dai, Heike Adel
2020 arXiv   pre-print
models, especially for small training sets.  ...  Through experiments on two data sets from the biomedical and materials science domains (i2b2-2010 and MaSciP), we show that simple augmentation can boost performance for both recurrent and transformer-based  ...  convert data from a high-resource language to a low-resource language, using a bilingual dictionary and an unsupervised machine translation model in order to expand the machine translation training set  ...
arXiv:2010.11683v1 fatcat:zsv2uqqmafej3jq7aip53cuqkm
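One of the "simple augmentations" the entry above refers to can be as basic as swapping a token for another token that carries the same label elsewhere in the training data. A minimal sketch of such label-wise token replacement; the replacement rate and the data format (parallel token/tag lists) are assumptions for illustration.

```python
import random
from collections import defaultdict

def labelwise_token_replacement(sentences, rate=0.3, seed=0):
    """Augment NER data by replacing tokens with other tokens that were
    observed with the same label anywhere in the training set."""
    rng = random.Random(seed)
    pool = defaultdict(list)              # label -> tokens seen with that label
    for tokens, tags in sentences:
        for tok, tag in zip(tokens, tags):
            pool[tag].append(tok)

    augmented = []
    for tokens, tags in sentences:
        new_tokens = [rng.choice(pool[tag]) if rng.random() < rate else tok
                      for tok, tag in zip(tokens, tags)]
        augmented.append((new_tokens, list(tags)))
    return augmented

train = [(["John", "lives", "in", "Paris"], ["B-PER", "O", "O", "B-LOC"]),
         (["Mary", "visited", "Berlin"], ["B-PER", "O", "B-LOC"])]
print(labelwise_token_replacement(train, rate=0.5))
```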
Showing results 1 — 15 out of 88,278 results