
Cross-model Back-translated Distillation for Unsupervised Machine Translation [article]

Xuan-Phi Nguyen, Shafiq Joty, Thanh-Tung Nguyen, Wu Kui, Ai Ti Aw
2021 arXiv   pre-print
Recent unsupervised machine translation (UMT) systems usually employ three main principles: initialization, language modeling and iterative back-translation, though they may apply them differently.  ...  Crucially, iterative back-translation and denoising auto-encoding for language modeling provide data diversity to train the UMT systems.  ...  Xuan-Phi Nguyen is supported by the A*STAR Computing and Information Science (ACIS) scholarship, provided by the Agency for Science, Technology and Research Singapore (A*STAR).  ... 
arXiv:2006.02163v4 fatcat:p7trbujrjvg6bik7sbqgwfsktq
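The iterative back-translation principle named in this abstract can be illustrated with a minimal sketch. The word-lexicon "translators" below are toy stand-ins (my assumption, not the paper's implementation); real UMT systems train neural models on the synthetic pairs each round.

```python
def make_translator(lexicon):
    """Word-by-word translator from a {src: tgt} lexicon; unknown
    words are copied through (a common UMT initialization)."""
    return lambda sent: " ".join(lexicon.get(w, w) for w in sent.split())

def back_translation_pairs(mono_tgt, tgt2src):
    """Back-translate target-side monolingual sentences to obtain
    synthetic (source, target) pairs for training a src->tgt model."""
    return [(tgt2src(t), t) for t in mono_tgt]

# Toy German->English lexicon standing in for the reverse model.
de2en = make_translator({"das": "the", "haus": "house"})
pairs = back_translation_pairs(["das haus"], de2en)
# pairs == [("the house", "das haus")]
```

In the iterative version, the pairs produced by each direction are used to retrain the opposite direction, and the two models bootstrap each other round by round.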

Pretrained Language Models for Text Generation: A Survey [article]

Junyi Li, Tianyi Tang, Wayne Xin Zhao, Jian-Yun Nie, Ji-Rong Wen
2022 arXiv   pre-print
The resurgence of deep learning has greatly advanced this field, in particular, with the help of neural generation models based on pre-trained language models (PLMs).  ...  Text Generation aims to produce plausible and readable text in a human language from input data.  ...  Prompt-Tuning for Text Generation Most generative PLMs are pre-trained using language modeling objectives and then fine-tuned on text generation tasks with task-specific objectives.  ... 
arXiv:2201.05273v4 fatcat:pnffabspsnbhvo44gbaorhxc3a
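The fine-tuning setup described in this abstract, where a generation task is cast as a language-modeling example, can be sketched as follows. The template string is purely illustrative (my assumption, not taken from the survey).

```python
def to_prompted_example(task, source, target):
    """Format one (source, target) pair as a single LM training
    string, so a generative PLM can be tuned with its usual
    language-modeling objective."""
    return f"[{task}] {source} => {target}"

ex = to_prompted_example("summarize", "long article text", "short summary")
# ex == "[summarize] long article text => short summary"
```

Prompt-tuning replaces the hand-written `[task]` marker with learned continuous prompt vectors, but the data flow is the same.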

Achieving Human Parity on Automatic Chinese to English News Translation [article]

Hany Hassan, Anthony Aue, Chang Chen, Vishal Chowdhary, Jonathan Clark, Christian Federmann, Xuedong Huang, Marcin Junczys-Dowmunt, William Lewis, Mu Li, Shujie Liu, Tie-Yan Liu (+11 others)
2018 arXiv   pre-print
Millions of people are using it today in online translation systems and mobile applications in order to communicate across language barriers.  ...  The question naturally arises whether such systems can approach or achieve parity with human translations.  ...  We wish to acknowledge the tremendous progress in sequence-to-sequence modeling made by the entire research community that paved the road for this achievement.  ... 
arXiv:1803.05567v2 fatcat:7cloanb32fbvbexl47bikdoqma

Multimodal Research in Vision and Language: A Review of Current and Emerging Trends [article]

Shagun Uppal, Sarthak Bhagat, Devamanyu Hazarika, Navonil Majumdar, Soujanya Poria, Roger Zimmermann, Amir Zadeh
2020 arXiv   pre-print
More recently, this has enhanced research interests in the intersection of the Vision and Language arena with its numerous applications and fast-paced growth.  ...  We look at its applications in their task formulations and how to solve various problems related to semantic perception and content generation.  ...  Tan and Bansal [324] proposed LXMERT, a cross-modal transformer for encapsulating vision-language connections by utilizing three specialized encoders corresponding to object relationships, language and  ... 
arXiv:2010.09522v2 fatcat:l4npstkoqndhzn6hznr7eeys4u

Review of end-to-end speech synthesis technology based on deep learning [article]

Zhaoxi Mu, Xinyu Yang, Yizhuo Dong
2021 arXiv   pre-print
Moreover, this paper also summarizes the open-source speech corpus of English, Chinese and other languages that can be used for speech synthesis tasks, and introduces some commonly used subjective and  ...  has more powerful modeling ability and a simpler pipeline.  ...  [212] introduced cross-language transfer learning into Tacotron.  ... 
arXiv:2104.09995v1 fatcat:q5lx74ycx5hobjox4ktl3amfta

Text Adversarial Attacks and Defenses: Issues, Taxonomy, and Perspectives

Xu Han, Ying Zhang, Wei Wang, Bin Wang, Yanhui Guo
2022 Security and Communication Networks  
Second, we propose a novel taxonomy for the existing adversarial attacks and defenses, which is fine-grained and closely aligned with practical applications.  ...  Adversarial examples were first discovered in the computer vision (CV) field, where models were fooled by perturbing the original inputs; they also exist in the natural language processing (NLP) community  ...  Acknowledgments: This study was supported in part by the National Key R&D Program of China under grant no. 2020YFB2103802 and in part by the Fundamental Research Funds for the Central Universities of China  ... 
doi:10.1155/2022/6458488 fatcat:eprramkfkvdofm6opvjksasg2q
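A common family of text adversarial attacks surveyed in work like this is greedy word substitution: swap words for synonyms until the model's prediction flips. The toy classifier and synonym table below are illustrative assumptions, not the paper's method.

```python
def toy_classifier(text):
    """Toy sentiment model: flags text containing 'terrible' as
    negative (label 1), everything else as 0."""
    return 1 if "terrible" in text.split() else 0

SYNONYMS = {"terrible": ["awful", "dreadful"]}  # illustrative table

def greedy_substitution_attack(text, classifier):
    """Greedily try synonym swaps; return the first perturbed text
    that changes the classifier's prediction, or None."""
    words = text.split()
    original = classifier(text)
    for i, w in enumerate(words):
        for s in SYNONYMS.get(w, []):
            candidate = " ".join(words[:i] + [s] + words[i + 1:])
            if classifier(candidate) != original:
                return candidate  # prediction flipped
    return None  # attack failed

adv = greedy_substitution_attack("a terrible movie", toy_classifier)
# adv == "a awful movie"; the toy classifier now predicts 0
```

Real attacks replace the lookup table with embedding-space neighbors or a masked language model, and query a neural classifier, but the greedy search loop is the same shape.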

Leveraging Discourse Rewards for Document-Level Neural Machine Translation

Inigo Jauregi Unanue, Nazanin Esmaili, Gholamreza Haffari, Massimo Piccardi
2020 Proceedings of the 28th International Conference on Computational Linguistics   unpublished
doi:10.18653/v1/2020.coling-main.395 fatcat:gjghdqnknjdp7m6p6noobs7nxa

Improving Resource-constrained Machine Translation and Text Generation Using Knowledge Transition

Despite the significant improvements of Neural Text Generation (NTG) systems such as Neural Machine Translation and Natural Language Generation, there are still some open challenges in this domain  ...  This research addresses these limitations by utilizing knowledge transition from high-resource NTG models to low-resource ones.  ...  This success can be credited to the NMT models' ability to learn cross-lingual representations.  ... 
doi:10.26180/18130757 fatcat:m2cubkl3ajg6vhprl3ozf5uaai