Two-Way Neural Network Chinese-English Machine Translation Model Fused with Attention Mechanism
2022
Scientific Programming
Word vectors are mapped directly by a neural network. The study constructs neural machine translation models for different neural network structures. ...
In the translation model based on the LSTM network, the gating mechanism reduces gradient attenuation and improves the ability to process long-distance sequences. ...
Different translation models are constructed for comparison and analysis of three kinds of neural networks: the recurrent neural network, long short-term memory, and the gated recurrent unit. The attention ...
doi:10.1155/2022/1270700
fatcat:jxpmbauk35byrmvclo4f5qxkji
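The gating mechanism mentioned in this abstract can be illustrated with a minimal, scalar LSTM cell. This is an illustrative sketch with made-up toy weights, not code from the paper; the point is that the cell state is updated additively through a forget gate, which is what lets gradients survive long sequences better than in a plain RNN.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One step of a scalar LSTM cell.

    The forget/input/output gates (f, i, o) regulate how much of the
    cell state is kept, written, and exposed. Because the cell state c
    is updated additively (f * c_prev + i * g) rather than through a
    squashing nonlinearity, gradient attenuation over long sequences
    is reduced.
    """
    f = sigmoid(w["wf"] * x + w["uf"] * h_prev + w["bf"])    # forget gate
    i = sigmoid(w["wi"] * x + w["ui"] * h_prev + w["bi"])    # input gate
    o = sigmoid(w["wo"] * x + w["uo"] * h_prev + w["bo"])    # output gate
    g = math.tanh(w["wg"] * x + w["ug"] * h_prev + w["bg"])  # candidate
    c = f * c_prev + i * g        # additive cell-state update
    h = o * math.tanh(c)          # hidden state
    return h, c

# Run a toy input sequence through the cell (all weights set to 0.5).
weights = {k: 0.5 for k in
           ("wf", "uf", "bf", "wi", "ui", "bi",
            "wo", "uo", "bo", "wg", "ug", "bg")}
h, c = 0.0, 0.0
for x in [1.0, -1.0, 0.5]:
    h, c = lstm_step(x, h, c, weights)
```

In a real model the scalars become vectors and matrices, but the gate structure is identical.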
Decoder Integration and Expected BLEU Training for Recurrent Neural Network Language Models
2014
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
We show how a recurrent neural network language model can be optimized towards an expected BLEU loss instead of the usual cross-entropy criterion. ...
Neural network language models are often trained by optimizing likelihood, but we would prefer to optimize for a task specific metric, such as BLEU in machine translation. ...
Acknowledgments We thank Michel Galley, Arul Menezes, Chris Quirk and Geoffrey Zweig for helpful discussions related to this work as well as the four anonymous reviewers for their comments. ...
doi:10.3115/v1/p14-2023
dblp:conf/acl/AuliG14
fatcat:r3yu7g3gz5cdrndkmhqqkgqi6e
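The abstract's idea of optimizing toward an expected BLEU loss instead of cross-entropy can be sketched as a risk objective over an n-best list. This is a schematic illustration, not the paper's implementation: hypothesis scores and BLEU values below are invented, and `expected_bleu_loss` is a hypothetical helper name.

```python
import math

def softmax(scores):
    """Turn model scores into a probability distribution."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def expected_bleu_loss(hyp_scores, hyp_bleus):
    """Expected-BLEU (risk) objective over an n-best list:

        loss = sum_i p(y_i) * (1 - BLEU(y_i))

    where p(y_i) comes from the model scores. Minimizing this loss
    shifts probability mass toward high-BLEU hypotheses, in contrast
    to cross-entropy, which only rewards the reference translation.
    """
    probs = softmax(hyp_scores)
    return sum(p * (1.0 - b) for p, b in zip(probs, hyp_bleus))

# Two hypotheses with BLEU 0.9 and 0.3: a model that scores the
# high-BLEU hypothesis higher incurs a lower expected loss.
loss_good = expected_bleu_loss([2.0, 0.0], [0.9, 0.3])
loss_bad  = expected_bleu_loss([0.0, 2.0], [0.9, 0.3])
```

Gradients of this loss with respect to the model scores can then be backpropagated into the network, which is the training signal the paper describes.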
Advanced Convolutional Neural Network-Based Hybrid Acoustic Models for Low-Resource Speech Recognition
2020
Computers
Recently, the recurrent neural networks (RNNs) have shown great abilities for modeling long-term context dependencies. ...
Of these networks, convolutional neural network (CNN) is an effective network for representing the local properties of the speech formants. ...
[27] built TDNN models for Swahili and Tamil languages and acquired an absolute word error rate improvement of 0.5 and 0.7 over the DNN models. ...
doi:10.3390/computers9020036
fatcat:k54s5pj7grggffsbibjpm5jk2q
Sentimental analysis using recurrent neural network
2018
International Journal of Engineering & Technology
The model used is a Long Short-Term Memory recurrent neural network, a deep learning technique applied to sentiment prediction. ...
Sentiment analysis has been an important topic of discussion for two decades, since Lee published his first paper on sentiment analysis in 2002. ...
Acknowledgement I thank CHRIST Deemed to be University, Bangalore, for providing a favorable environment for carrying out my research work. ...
doi:10.14419/ijet.v7i2.27.12635
fatcat:zl47qyhc3ndrbk6q5zflnpv3xu
Robust Neural Language Translation Model Formulation using Seq2seq approach
2021
Zenodo
This study has gone through many procedures for machine translation and found the simplest and most effective way of applying an RNN Language Model [5], the Feedforward Neural Network Language Model, to ...
Typical neural language models rely on a vector representation for each word, and the work used a fixed vocabulary for both languages. ...
doi:10.5281/zenodo.5270390
fatcat:2mlbemrhzbdrjk4uo6zmt66ndu
Prediction-Adaptation-Correction Recurrent Neural Networks for Low-Resource Language Speech Recognition
[article]
2015
arXiv
pre-print
In this paper, we investigate the use of prediction-adaptation-correction recurrent neural networks (PAC-RNNs) for low-resource speech recognition. ...
Our model outperforms other state-of-the-art neural networks (DNNs, LSTMs) on IARPA-Babel tasks. ...
Acknowledgements The authors would like to thank everyone in the Babelon team for feedback and support for various resources. ...
arXiv:1510.08985v1
fatcat:iqecic56k5cjfn3qsv7x7igx7m
Automatic speech recognition for launch control center communication using recurrent neural networks with data augmentation and custom language model
[article]
2018
arXiv
pre-print
We showed that data augmentation and custom language models can improve speech recognition accuracy. ...
We used bidirectional deep recurrent neural networks to train and test speech recognition performance. ...
For context-sensitive language decoding, it is important to learn long-term relationships between words. The solution is a recurrent neural network. ...
arXiv:1804.09552v1
fatcat:li4nsatnk5etrbmbdqg4tvpxau
AIA: Artificial intelligence for art
2018
EVA London 2018
"General" means that one AI program realises a number of different tasks and the same code can be used in many applications. Artificial intelligence. Recurrent neural network. Reinforcement learning. ...
There is a domain called AGI where it will be possible to find solutions for these problems. ...
RECURRENT NEURAL NETWORKS Models of sequential data, such as natural language, speech and video, are the core of many machine-learning applications. ...
doi:10.14236/ewic/eva2018.5
dblp:conf/eva/Lisek18
fatcat:wcpjqcrm2zbpvdtnpt63zls2um
Prediction-adaptation-correction recurrent neural networks for low-resource language speech recognition
2016
2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
In this paper, we investigate the use of prediction-adaptation-correction recurrent neural networks (PAC-RNNs) for low-resource speech recognition. ...
The information from the correction network is also used by the prediction network in a recurrent loop. Our model outperforms other state-of-the-art neural networks (DNNs, LSTMs) on IARPA-Babel tasks. ...
The work uses the following language packs: Cantonese (IARPA-babel101-v0.4c), Turkish (IARPA-babel105b-v0. ...
doi:10.1109/icassp.2016.7472712
dblp:conf/icassp/ZhangCGY16
fatcat:zz7gnpwsifbuthv2yakdcuovma
Natural Language Processing with Improved Deep Learning Neural Networks
2022
Scientific Programming
This model is based on the feed-forward neural network model described above and will be used as a feature extractor. ...
This paper proposes a dependency syntactic analysis model based on a long short-term memory neural network. ...
[19] proposed the use of a recurrent neural network to build a language model. The model uses the recurrent neural network to learn a distributed representation for each word while also modeling the word ...
doi:10.1155/2022/6028693
fatcat:6cyjzepiajfpbnw63jo6vyldga
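The recurrent language model described in this abstract, in which the network learns a distributed representation for each word while modeling word order, can be sketched in a few lines. This is a toy illustration with an invented five-word vocabulary and random weights, not the cited model: each step looks up the word's embedding, folds it into the hidden state, and emits a probability distribution over the next word.

```python
import math
import random

random.seed(0)

VOCAB = ["<s>", "the", "cat", "sat", "</s>"]
# Distributed representation: each word maps to a small dense vector.
EMB = {w: [random.uniform(-0.5, 0.5) for _ in range(3)] for w in VOCAB}
# Output projection: one row of weights per vocabulary word.
W_OUT = [[random.uniform(-0.5, 0.5) for _ in range(3)] for _ in VOCAB]

def rnn_lm_step(word, h_prev):
    """One step of a toy RNN language model: look up the word's
    embedding, mix it into the recurrent hidden state, and produce
    a softmax distribution over the next word."""
    x = EMB[word]
    h = [math.tanh(xi + 0.5 * hi) for xi, hi in zip(x, h_prev)]
    logits = [sum(wi * hi for wi, hi in zip(row, h)) for row in W_OUT]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    probs = [e / z for e in exps]
    return h, dict(zip(VOCAB, probs))

# Feed a short prefix through the model; next_probs scores candidates
# for the word following "cat".
h = [0.0, 0.0, 0.0]
for w in ["<s>", "the", "cat"]:
    h, next_probs = rnn_lm_step(w, h)
```

Training would adjust `EMB` and `W_OUT` by backpropagation through time, so that the same embedding table ends up encoding word similarity as a side effect of next-word prediction.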
Large scale recurrent neural network on GPU
2014
2014 International Joint Conference on Neural Networks (IJCNN)
Such a unique architecture enables the recurrent neural network to remember the past processed information and makes it an expressive model for nonlinear sequence processing tasks. ...
We then use the proposed GPU implementation to scale up the recurrent neural network and improve its performance. ...
recurrent neural network language model. ...
doi:10.1109/ijcnn.2014.6889433
dblp:conf/ijcnn/LiZHD0XZY14
fatcat:g2yuubvlfva4zlm72fdpwcpwrm
The implementation of a Deep Recurrent Neural Network Language Model on a Xilinx FPGA
[article]
2017
arXiv
pre-print
In this project, we design a Deep Recurrent Neural Network (DRNN) Language Model (LM) and implement a hardware accelerator with AXI Stream interface on a PYNQ board which is equipped with a XILINX ZYNQ ...
However, these applications mainly focus on large scale FPGA clusters which have an extreme processing power for executing massive matrix or convolution operations but are unsuitable for portable or mobile ...
RECURRENT NEURAL NETWORK AND LANGUAGE MODEL
A. Recurrent Neural Network An RNN is a kind of neural network that has a memory feature. ...
arXiv:1710.10296v3
fatcat:koz3owbrmrakdmemlj66p5mypa
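The "memory feature" this abstract attributes to RNNs is simply that the hidden state is a function of all earlier inputs, not just the current one. A minimal sketch, with arbitrary toy weights of my choosing:

```python
import math

def rnn_step(x, h_prev, w_x=0.8, w_h=0.5, b=0.0):
    """Plain (Elman) RNN cell: the new hidden state mixes the current
    input with the previous hidden state, so information from earlier
    inputs is carried forward in time."""
    return math.tanh(w_x * x + w_h * h_prev + b)

# Feed an impulse followed by zeros: the hidden state stays nonzero,
# i.e. the network still "remembers" the first input.
h = 0.0
for x in [1.0, 0.0, 0.0]:
    h = rnn_step(x, h)
```

The same structure, with matrices in place of scalars, is what the paper's hardware accelerator evaluates step by step.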
Splitting source code identifiers using Bidirectional LSTM Recurrent Neural Network
[article]
2018
arXiv
pre-print
We introduce a bidirectional LSTM recurrent neural network to detect subtokens in source code identifiers. ...
The proposed network can be used to improve the upstream models which are based on source code identifiers, as well as improving developer experience allowing writing code without switching the keyboard ...
Character-level bidirectional recurrent neural network Character-level bidirectional recurrent neural networks (BiRNNs) [28] are a family of models that combine two recurrent networks moving through ...
arXiv:1805.11651v2
fatcat:yinfbkq2krh43i2a3uzobn63pi
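The character-level BiRNN described in this abstract pairs two recurrent passes, one left-to-right and one right-to-left, so every character's state sees context from both sides. The following sketch uses invented scalar weights and a hypothetical `birnn_states` helper; a real subtoken splitter would put a classifier on each paired state to mark split points (e.g. `maxValue` into `max` and `Value`).

```python
import math

def rnn_pass(chars, w_x=0.7, w_h=0.4):
    """One directional RNN pass over a character sequence; returns the
    hidden state at every position. Characters are hashed to a small
    numeric feature purely for illustration."""
    h, states = 0.0, []
    for c in chars:
        h = math.tanh(w_x * (ord(c) % 7) / 7.0 + w_h * h)
        states.append(h)
    return states

def birnn_states(identifier):
    """Character-level bidirectional RNN: run one pass left-to-right
    and another right-to-left, then pair the two states at each
    character position."""
    fwd = rnn_pass(identifier)
    bwd = rnn_pass(identifier[::-1])[::-1]
    return list(zip(fwd, bwd))

states = birnn_states("maxValue")
```

Each element of `states` summarizes both the prefix and the suffix around a character, which is exactly the context a split/no-split decision needs.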
Recurrent Neural Network Techniques: Emphasis on Use in Neural Machine Translation
2021
Informatica (Ljubljana, Tiskana izd.)
The techniques are divided into three categories: recurrent neural networks, recurrent neural networks with phrase-based models, and recurrent neural techniques with graph-based models. ...
In this paper, machine translation techniques based on recurrent neural networks are analyzed and discussed. ...
We divided the techniques into three categories: recurrent neural networks, recurrent neural networks with phrase-based models, and recurrent neural networks with graph-based models. ...
doi:10.31449/inf.v45i7.3743
fatcat:yytqgxixkrgijhoglpfyqz6a4i
Automatic Detection Technique for Speech Recognition based on Neural Networks Inter-Disciplinary
2018
International Journal of Advanced Computer Science and Applications
In particular, recurrent neural networks (RNNs) have several characteristics that make them a model of choice for automatic speech processing. ...
The LSTM model was compared to two neural models: the Multi-Layer Perceptron (MLP) and Elman's Recurrent Neural Network (RNN). ...
The three neuronal models tested are: Multi-Layer Perceptron (MLP), Elman's Recurrent Neural Network (RNN) and LSTM. ...
doi:10.14569/ijacsa.2018.090326
fatcat:uqxbzpitingxrneflg7i2bvfjq