
Weighted Transformer Network for Machine Translation [article]

Karim Ahmed, Nitish Shirish Keskar, Richard Socher
2017 arXiv   pre-print
We propose Weighted Transformer, a Transformer with modified attention layers, that not only outperforms the baseline network in BLEU score but also converges 15-40% faster.  ...  State-of-the-art results on neural machine translation often use attentional sequence-to-sequence models with some form of convolution or recursion.  ...  The authors report results for neural machine translation that show the Transformer network achieves state-of-the-art performance on the WMT 2014 English-to-German and English-to-French tasks while being  ... 
arXiv:1711.02132v1 fatcat:45u2pz33xjd3hcqq53uhzixxye
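A minimal sketch of the branch-weighting idea the abstract describes, under the simplest reading: each attention head's output is scaled by a learned, softmax-normalized weight before summation. The names `combine_heads` and `branch_logits` are illustrative, not the authors' code.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def combine_heads(head_outputs, branch_logits):
    """Combine attention-head outputs with learned, normalized branch
    weights instead of plain concatenation (the 'weighted' idea)."""
    kappa = softmax(branch_logits)                  # one weight per head
    return sum(k * h for k, h in zip(kappa, head_outputs))

heads = [np.random.randn(5, 64) for _ in range(8)]  # 8 heads, 5 tokens
print(combine_heads(heads, np.zeros(8)).shape)      # (5, 64)
```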

Monolingually Derived Phrase Scores for Phrase Based SMT Using Neural Networks Vector Representations [article]

Amir Pouya Aghasadeghi, Mohadeseh Bastan
2016 arXiv   pre-print
In this paper, we propose two new features for estimating phrase-based machine translation parameters from mainly monolingual data.  ...  Our method is based on two recently introduced neural network vector representation models for words and sentences.  ...  BACKGROUND In machine translation, neural networks were first used by [10, 11]. They used a neural network for example-based machine translation.  ... 
arXiv:1506.00406v3 fatcat:5x4chtbbargxbpprncx62v6brq
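A hedged sketch of the general approach (the paper's exact features are not shown in the snippet): score a phrase pair by the similarity of composed word vectors, with a cross-lingual map `W` assumed for illustration.

```python
import numpy as np

def phrase_vec(phrase, emb):
    """Compose a phrase vector by averaging its word vectors."""
    return np.mean([emb[w] for w in phrase.split() if w in emb], axis=0)

def phrase_score(src, tgt, src_emb, tgt_emb, W):
    """Cosine similarity between the mapped source-phrase vector and
    the target-phrase vector; W maps source space to target space."""
    s, t = W @ phrase_vec(src, src_emb), phrase_vec(tgt, tgt_emb)
    return float(s @ t / (np.linalg.norm(s) * np.linalg.norm(t)))

rng = np.random.default_rng(0)
src_emb = {w: rng.normal(size=4) for w in ["das", "haus"]}
tgt_emb = {w: rng.normal(size=4) for w in ["the", "house"]}
W = np.eye(4)   # identity stands in for a learned cross-lingual map
print(phrase_score("das haus", "the house", src_emb, tgt_emb, W))
```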

Research on Intelligent English Translation Method Based on the Improved Attention Mechanism Model

Rong Wang, Bai Yuan Ding
2021 Scientific Programming  
The use of neural machine translation algorithms for English translation is a hot topic in current research.  ...  In this paper, we establish an attention coding and decoding model to address the shortcomings of traditional machine translation algorithms, combine the attention mechanism with a neural network framework  ...  For example, compared with CNN-based machine translation models, the self-attention mechanism focuses on all words in a sentence at the same time, which makes the Transformer model unable to learn local  ... 
doi:10.1155/2021/9667255 fatcat:iwf6c2ldqfbh3fapvhk233hzbm
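The snippet's contrast (self-attention looks at all words in a sentence at once, while a CNN sees only a local window) can be made concrete with a minimal scaled dot-product self-attention; this is a generic sketch, not the paper's model.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention: every token attends to every
    other token in one step, with no CNN-style local window."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    S = Q @ K.T / np.sqrt(K.shape[-1])
    A = np.exp(S - S.max(axis=-1, keepdims=True))
    A /= A.sum(axis=-1, keepdims=True)      # softmax over all positions
    return A @ V

X = np.random.randn(6, 8)                   # 6 tokens, model dim 8
W = np.eye(8)
print(self_attention(X, W, W, W).shape)     # (6, 8)
```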

OPPO NMT System for IWSLT 2019

Xiaopu Li, Zhengshan Xue, Jie Hao
2019 Zenodo  
This paper illustrates OPPO's submission for the IWSLT 2019 text translation task. Our system is based on the Transformer architecture. Besides, we also study the effect of model ensembling.  ...  Sogou neural machine translation systems for WMT17 [  ...  Baidu Neural Machine Translation Systems for WMT19 [C]//Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1). 2019: 374-381.  ... 
doi:10.5281/zenodo.3525569 fatcat:56nijvzysvgwnamni2frekdury
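The snippet mentions model ensembling; a common recipe (an assumption here, not necessarily OPPO's exact scheme) averages the per-step output distributions of several models before picking the next token.

```python
import numpy as np

def ensemble_next_token(prob_dists):
    """Average several models' next-token distributions and pick the
    argmax; log-averaging is an equally common alternative."""
    return int(np.mean(prob_dists, axis=0).argmax())

p1 = np.array([0.7, 0.2, 0.1])         # model 1's next-token probabilities
p2 = np.array([0.4, 0.5, 0.1])         # model 2's
print(ensemble_next_token([p1, p2]))   # 0: token 0 wins on average
```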

Research on Neural Machine Translation Model

Mengyao Chen, Yong Li, Runqi Li
2019 Journal of Physics: Conference Series  
modeling and transduction problems for a long time, such as language modeling and machine translation.  ...  In neural machine translation (NMT), recurrent neural networks, especially long short-term memory networks and gated recurrent neural networks, have been regarded as the latest methods for sequence  ...  Acknowledgments: we express our sincere gratitude to Yong Li for his help in the process of writing this thesis.  ... 
doi:10.1088/1742-6596/1237/5/052020 fatcat:nghf3oryznatboysa2t4xswlmu

Hybrid Self-Attention Network for Machine Translation [article]

Kaitao Song, Xu Tan, Furong Peng, Jianfeng Lu
2018 arXiv   pre-print
The encoder-decoder is the typical framework for Neural Machine Translation (NMT), and different structures have been developed for improving the translation performance.  ...  Experimental results on three machine translation tasks show that our proposed framework outperforms the Transformer baseline significantly and achieves superior results over state-of-the-art NMT systems  ...  DiSAN: Directional self-attention network for RNN/CNN-free language understanding. CoRR. Weighted Transformer network for machine translation. arXiv.  ... 
arXiv:1811.00253v3 fatcat:nq2l4wlfufevbcro5r3yuqwq44

BioNMT: A Biomedical Neural Machine Translation System

Hongtao Liu, Yanchun Liang, Liupu Wang, Xiaoyue Feng, Renchu Guan
2020 International Journal of Computers Communications & Control  
The proposed biomedical neural machine translation system (BioNMT) adopts the sequence-to-sequence translation framework, which is based on deep neural networks.  ...  model and external dictionaries to build a novel translation model for biomedical texts based on the Transformer model.  ...  become the mainstream model for machine translation.  ... 
doi:10.15837/ijccc.2020.6.3988 fatcat:5y7lmkm7pjg5rcayj35wtfbuvq

Normalization of Input-output Shared Embeddings in Text Generation Models [article]

Jinyang Liu, Yujia Zhai, Zizhong Chen
2020 arXiv   pre-print
Machine Translation, Text Summarization), in which input and output both have huge sizes of vocabularies.  ...  Neural Network based models have been state-of-the-art models for various Natural Language Processing tasks; however, the input and output dimension problem in the networks has still not been fully resolved  ...  For example, Transformer-based models [Vaswani et al., 2017; Ott et al., 2018] have been state-of-the-art models for tasks such as Machine Translation, and huge pre-trained Neural Network models [Devlin  ... 
arXiv:2001.07885v2 fatcat:veavhftynvb2dgnlddyc6wiq6i
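A minimal sketch of input-output embedding sharing with a normalization applied to the shared matrix. The specific choice of row-wise L2 normalization is an assumption; the paper studies normalization variants.

```python
import numpy as np

def normalize_rows(E, eps=1e-8):
    """L2-normalize each embedding vector (one illustrative choice of
    normalization for the shared matrix)."""
    return E / (np.linalg.norm(E, axis=1, keepdims=True) + eps)

vocab, d = 10, 4
E = normalize_rows(np.random.randn(vocab, d))  # one matrix, shared

x = E[3]             # input side: embedding lookup uses E ...
logits = E @ x       # ... output side: the projection reuses the same E
print(logits.shape)  # (10,)
```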

Research on the Efficiency of Intelligent Algorithm for English Speech Recognition and Sentence Translation

Gang Zhang
2021 Informatica (Ljubljana, Tiskana izd.)  
had the lowest error rate for the translation of speech recognition results, and the translation gained the highest rating in the evaluation of ten professional translators.  ...  Machine translation has gradually come into wide use to improve the efficiency of English translation.  ...  [5] used a character-level convolutional network for machine translation.  ... 
doi:10.31449/inf.v45i2.3564 fatcat:nt5jk3sccned3hilxli7ja47gq

DLBT: Deep Learning-Based Transformer to Generate Pseudo-Code from Source Code

Walaa Gad, Anas Alokla, Waleed Nazih, Mustafa Aref, Abdel-badeeh Salem
2022 Computers Materials & Continua  
Recently, neural machine translation has been used to generate textual descriptions for source code.  ...  The experiments show promising performance results compared with other machine translation methods such as Recurrent Neural Networks (RNN).  ...  , W_i^K is the weight matrix of head i for keys, and W_i^V is the weight matrix of head i for values.  ... 
doi:10.32604/cmc.2022.019884 fatcat:d3e3fwd7wngp5otpqpax6i34gi
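The reconstructed notation above is the standard multi-head attention of Vaswani et al. (2017), where W_i^Q, W_i^K, and W_i^V are head i's projection matrices for queries, keys, and values; written out:

```latex
\mathrm{head}_i = \mathrm{Attention}\bigl(Q W_i^{Q},\; K W_i^{K},\; V W_i^{V}\bigr),
\qquad
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V
```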

Playing the lottery with rewards and multiple languages: lottery tickets in RL and NLP [article]

Haonan Yu, Sergey Edunov, Yuandong Tian, Ari S. Morcos
2020 arXiv   pre-print
Notably, we are able to find winning ticket initializations for Transformers which enable models one-third the size to achieve nearly equivalent performance.  ...  and large-scale Transformer models (Vaswani et al., 2017).  ...  MACHINE TRANSLATION WITH TRANSFORMERS We next evaluate whether winning tickets are present in Transformer models trained on machine translation.  ... 
arXiv:1906.02768v3 fatcat:7eb3nb7hyjfonb3nx5q37lnlpq
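A minimal sketch of the lottery-ticket procedure behind the abstract (train, prune the smallest-magnitude weights, rewind the survivors to their initialization). The training step is faked here, and the one-third keep rate mirrors the abstract's figure.

```python
import numpy as np

def magnitude_mask(W, keep_frac):
    """Binary mask keeping the largest-magnitude weights of W."""
    k = max(1, int(W.size * keep_frac))
    thresh = np.sort(np.abs(W), axis=None)[-k]
    return (np.abs(W) >= thresh).astype(W.dtype)

rng = np.random.default_rng(0)
W_init = rng.normal(size=(8, 8))                    # saved initialization
W_trained = W_init + 0.1 * rng.normal(size=(8, 8))  # stand-in for training
mask = magnitude_mask(W_trained, keep_frac=1/3)     # "one-third the size"
ticket = mask * W_init              # rewind surviving weights to init
print(f"{mask.mean():.2f} of weights kept")         # ~0.33
```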

What's Hidden in a One-layer Randomly Weighted Transformer? [article]

Sheng Shen, Zhewei Yao, Douwe Kiela, Kurt Keutzer, Michael W. Mahoney
2021 arXiv   pre-print
To find subnetworks for one-layer randomly weighted neural networks, we apply different binary masks to the same weight matrix to generate different layers.  ...  We demonstrate that, hidden within one-layer randomly weighted neural networks, there exist subnetworks that can achieve impressive performance, without ever modifying the weight initializations, on machine  ...  We would like to acknowledge DARPA, IARPA, NSF, and ONR for providing partial support of this work.  ... 
arXiv:2109.03939v1 fatcat:h7cbn7yzprgtto27laoqu5igt4
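A hedged sketch of the mechanics the abstract states: different binary masks applied to one fixed random weight matrix emulate different layers. The masks here are random, whereas the paper learns them with a scoring procedure not shown in the snippet.

```python
import numpy as np

rng = np.random.default_rng(0)
W_shared = rng.normal(size=(16, 16))  # one fixed random weight matrix

def masked_layer(x, mask):
    """Each 'layer' reuses W_shared, gated by its own binary mask."""
    return np.tanh(x @ (W_shared * mask))

# Random masks only demonstrate the weight sharing across layers.
masks = [rng.random((16, 16)) < 0.5 for _ in range(3)]
x = rng.normal(size=(1, 16))
for m in masks:                       # three layers, same weights
    x = masked_layer(x, m)
print(x.shape)                        # (1, 16)
```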

Optimization of English Machine Translation by Deep Neural Network under Artificial Intelligence

Xiaohua Guo, Vijay Kumar
2022 Computational Intelligence and Neuroscience  
First, the work implements a deep learning translation network model for English translation. On this basis, the neural machine translation model is designed under transfer learning.  ...  To improve the function of machine translation to adapt to global language translation, the work takes deep neural network (DNN) as the basic theory, carries out transfer learning and neural network translation  ...  Neural machine translation (NMT) gradually becomes an important research field in MT [9], thanks to its advanced translation performance. The neurons of a neural network are similar to those of the human  ... 
doi:10.1155/2022/2003411 pmid:35498202 pmcid:PMC9050287 fatcat:djcin3t6y5gbhb27pjgxcgcg3e

On the High Dimensional Information Processing in Quaternionic Domain and its Applications

Sushil Kumar, Bipin Kumar Tripathi
2018 International Journal of Advances in Applied Sciences  
This paper presents a well-generalized learning machine with a quaternionic domain neural network that can finely process the magnitude and phase information of high-dimensional data without any hassle.  ...  Conventional real-valued neural networks have been tried on the problems associated with high-dimensional parameters, but the required network structure possesses high complexity and is very time consuming  ...  Scaling, translation and rotation: The learning of QDNN for a general linear transformation (scaling factor 1/2, counterclockwise rotation about the x-axis by π/2 radians, and translation by (0, 0, 0.3)) is performed  ... 
doi:10.11591/ijaas.v7.i2.pp177-190 fatcat:ttohrtontrh7lblf7xdtdv3ylu
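The transformation quoted in the snippet (scale by 1/2, rotate counterclockwise about the x-axis by π/2, translate by (0, 0, 0.3)) can be checked with plain quaternion arithmetic; this is a sketch of the target mapping itself, not the QDNN that learns it.

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

theta = np.pi / 2                      # rotation angle about the x-axis
q = np.array([np.cos(theta/2), np.sin(theta/2), 0.0, 0.0])
q_conj = q * np.array([1, -1, -1, -1])

p = np.array([0.0, 0.0, 1.0, 0.0])     # point (0, 1, 0) as a pure quaternion
rotated = qmul(qmul(q, p), q_conj)     # q p q*  ->  (0, 0, 0, 1), i.e. (0, 0, 1)
result = 0.5 * rotated[1:] + np.array([0.0, 0.0, 0.3])  # scale, then translate
print(np.round(result, 3))             # [0.  0.  0.8]
```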

Transformer++ [article]

Prakhar Thapak, Prodip Hore
2020 arXiv   pre-print
Recent advancements in attention mechanisms have replaced recurrent neural networks and their variants for machine translation tasks.  ...  networks.  ...  We call this Transformer++. I. INTRODUCTION Neural machine translation addresses the problem of translating one language into another language using neural networks.  ... 
arXiv:2003.04974v1 fatcat:gs7l5pos3zazrewefuvmbl24da
Showing results 1 — 15 out of 131,924 results