A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2021; you can also visit the original URL.
The file type is application/pdf.
Refinement of matching costs for stereo disparities using recurrent neural networks
2021
EURASIP Journal on Image and Video Processing
Exploiting this sequential information, we aimed to determine the position of the correct matching point by using recurrent neural networks, as in the case of speech processing problems. ...
Žbontar and LeCun proposed MC-CNN [16], in which two CNN-based Siamese networks are introduced, named fast (MC-CNN-Fast) and accurate (MC-CNN-Acrt). ...
[Table excerpt; column headers and the first method's name are truncated in the snippet]
(unnamed method)         16.03  16.80  17.91  20.14  7.4006  35.12
MC-CNN-Acrt              13.18  13.81  14.76  16.86  6.5936  35.13
MC-CNN-Fast + CR-RNN C   13.53  14.36  15.65  18.66  4.8488  26.84
MC-CNN-Fast + CR-RNN P   12.89 ...
doi:10.1186/s13640-021-00551-9
fatcat:2xbvb5cjufbjdpibbizhkhb6tq
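The Siamese matching-cost idea behind MC-CNN-Fast can be sketched with a toy NumPy stand-in; the single linear "tower", the 9x9 patch size, and the 32-dim feature size below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def extract_features(patch, W):
    """Shared 'tower': one linear layer + ReLU + L2 normalisation.
    W is the shared weight matrix (the Siamese property: both patches
    pass through the same weights)."""
    h = np.maximum(0.0, W @ patch.ravel())
    return h / (np.linalg.norm(h) + 1e-8)

def matching_cost(left_patch, right_patch, W):
    """Fast-style matching cost: one minus the cosine similarity of the
    two embeddings, so a lower cost means a better match."""
    fl = extract_features(left_patch, W)
    fr = extract_features(right_patch, W)
    return 1.0 - float(fl @ fr)

rng = np.random.default_rng(0)
W = rng.standard_normal((32, 81))      # 9x9 patches -> 32-dim features
patch = rng.standard_normal((9, 9))
same = matching_cost(patch, patch, W)  # identical patches -> cost ~0
diff = matching_cost(patch, rng.standard_normal((9, 9)), W)
print(same, diff)
```

In the actual fast network the dot product of the two learned embeddings is computed once per candidate disparity and the minimum-cost disparity is selected.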
Speeding up Context-based Sentence Representation Learning with Non-autoregressive Convolutional Decoding
[article]
2018
arXiv
pre-print
After that, we designed a model that keeps an RNN as the encoder while using a non-autoregressive convolutional decoder. ...
We empirically show that our model is simple and fast while producing rich sentence representations that excel in downstream tasks. ...
As presented, our asymmetric RNN-CNN model has strong transferability and is overall better than existing unsupervised models in terms of fast training speed and good performance on evaluation ...
arXiv:1710.10380v3
fatcat:wgckw2cz7bdb7atat74gg4fcxa
Semantic Regularisation for Recurrent Image Annotation
[article]
2016
arXiv
pre-print
Importantly, this makes the end-to-end training of the CNN and RNN slow and ineffective due to the difficulty of back-propagating gradients through the RNN to train the CNN. ...
Existing models use the weakly semantic CNN hidden layer or its transform as the image embedding that provides the interface between the CNN and RNN. ...
It thus allows for better and more efficient fine-tuning of the CNN module, as well as fast convergence in end-to-end training of the full CNN-RNN model. ...
arXiv:1611.05490v1
fatcat:hedw7uovwjgftp2wwi4uvaoi7y
Semantic Regularisation for Recurrent Image Annotation
2017
2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Importantly, this makes the end-to-end training of the CNN and RNN slow and ineffective due to the difficulty of back-propagating gradients through the RNN to train the CNN. ...
Existing models use the weakly semantic CNN hidden layer or its transform as the image embedding that provides the interface between the CNN and RNN. ...
It thus allows for better and more efficient fine-tuning of the CNN module, as well as fast convergence in end-to-end training of the full CNN-RNN model. ...
doi:10.1109/cvpr.2017.443
dblp:conf/cvpr/LiuXHYS17
fatcat:gkgkysdyrfdrzjmvyeuzw3xkvu
Speeding up Context-based Sentence Representation Learning with Non-autoregressive Convolutional Decoding
2018
Proceedings of The Third Workshop on Representation Learning for NLP
After that, we designed a model that keeps an RNN as the encoder while using a non-autoregressive convolutional decoder. ...
We empirically show that our model is simple and fast while producing rich sentence representations that excel in downstream tasks. ...
As presented, our asymmetric RNN-CNN model has strong transferability and is overall better than existing unsupervised models in terms of fast training speed and good performance on evaluation ...
doi:10.18653/v1/w18-3009
dblp:conf/rep4nlp/TangJFWS18
fatcat:azhyp36ptnd7lh7jnzrpn3wtqm
Real-time Memory Efficient Large-pose Face Alignment via Deep Evolutionary Network
[article]
2019
arXiv
pre-print
Afterward, a simple and effective CNN feature is extracted and fed to a Recurrent Neural Network (RNN) for evolutionary learning. ...
However, factors such as large pose variation and computational inefficiency still hinder its broad implementation. ...
Note that the experiments of the fast RNN module are implemented based on Fast DHM CNN ×2. ...
arXiv:1910.11818v2
fatcat:vs4nz7vvfrezpcu2gjjvykwjry
Deep Recurrent Neural Network Reveals a Hierarchy of Process Memory during Dynamic Natural Vision
[article]
2017
bioRxiv
pre-print
Unlike the CNN, the RNN learned spatiotemporal features from videos to enable action recognition. ...
The extended model, or the recurrent neural network (RNN), embodied a hierarchical and distributed model of process memory as an integral part of visual processing. ...
As described below, the same training methods were used regardless of whether the RNN or the CNN was used as the feature model. ...
doi:10.1101/177196
fatcat:42o7v3pbg5cpbnsh66keatv3py
Exploiting Multi-layer Features Using a CNN-RNN Approach for RGB-D Object Recognition
[chapter]
2019
Lecture Notes in Computer Science
It first employs a pre-trained CNN model as the underlying feature extractor to get visual features at different layers for RGB and depth modalities. ...
In order to utilize the CNN model trained on large-scale RGB datasets for the depth domain, depth images are converted to a representation similar to RGB images. ...
We use the pre-trained VGG-f model as a feature extractor without fine-tuning. Therefore, the procedure requires no training at the feature extraction stage and works fast. ...
doi:10.1007/978-3-030-11015-4_51
fatcat:a4iezbiv3vfrraal333akeipba
Deep Learning Based Energy Disaggregation and On/Off Detection of Household Appliances
[article]
2019
arXiv
pre-print
Based on the same dataset, we show that for the task of on/off detection the second framework, i.e., directly training a binary classifier, achieves better performance in terms of F1 score. ...
Neural network models can learn complex patterns from large amounts of data and have been shown to outperform the traditional machine learning methods such as variants of hidden Markov models. ...
ACKNOWLEDGEMENT This work was carried out as part of the "HomeSense: digital sensors for social research" project funded by the Economic and Social Research Council (grant ES/N011589/1) through the National ...
arXiv:1908.00941v2
fatcat:fxrwhhdtvjconalzvf4pmncss4
Fast Prospective Detection of Contrast Inflow in X-ray Angiograms with Convolutional Neural Network and Recurrent Neural Network
[chapter]
2017
Lecture Notes in Computer Science
The first approach trains a convolutional neural network (CNN) to distinguish whether a frame has contrast agent or not. ...
As the proposed methods work in prospective settings and run fast, they have the potential of being used in clinical practice. ...
The CNN-based method ran very fast, using on average only 14 ms to process one frame. ...
doi:10.1007/978-3-319-66179-7_52
fatcat:n55fyy7g6nbptcbb2kmtgdqvaq
Fast Detection of Distributed Denial of Service Attacks in VoIP Networks Using Convolutional Neural Networks
2020
International Journal of Intelligent Computing and Information Sciences
Our experiments find that the CNN model achieved a high F1 score (99-100%), matching another deep learning approach that utilizes a Recurrent Neural Network (RNN), but with less detection time. ...
Full deep neural network model based on CNN ...
As shown in the table, CNN-AVG, CNN-MAX, and RNN-GUR detect almost all different types of DDoS attacks, while the l1-SVM didn't detect low-rate attacks. ...
doi:10.21608/ijicis.2021.51555.1046
fatcat:dexh6czbk5c3xnmc2xqgyrnarq
Real-Time Object Recognition using Region based Convolution Neural Network and Recursive Neural Network
2019
International journal of recent technology and engineering
Specifically, we applied the transfer learning approach to pass the parameters pre-trained on the Image web dataset to the RNN portion and follow a custom loss function for the model to ...
In particular, a hybrid approach using R-CNN and RNN has been proposed that improves the accuracy of object recognition, learns structured image attributes, and begins image analysis. ...
RNN can be viewed as one of the most efficient systems, combining the working stages of CNN and highway networks. ...
doi:10.35940/ijrte.d8326.118419
fatcat:3znfgj5mhrh4xoq6bexwul2hv4
Speech Command Recognition using Artificial Neural Networks
2020
JOIV: International Journal on Informatics Visualization
the RNN as our models. ...
result showed that among models trained on 20 different speech commands, CNN in combination with the RNN achieved 94.79% validation accuracy, 96.66 test accuracy and 0.117 loss, whereas CNN ...
doi:10.30630/joiv.4.2.358
fatcat:pozzuovuqjf6xnrsu3ib63lvya
A Study on a Joint Deep Learning Model for Myanmar Text Classification
2020
2020 IEEE Conference on Computer Applications(ICCA)
The comparative analysis is performed on the baseline Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN) and their combined model CNN-RNN. ...
This paper uses pre-trained word vectors to handle the resource-demanding problem and studies the effectiveness of a joint Convolutional Neural Network and Long Short Term Memory (CNN-LSTM) for Myanmar ...
We greatly thank all the researchers who publicly shared pre-trained word vectors; their work was very helpful in accomplishing ours and is very useful for resource-scarce languages. ...
doi:10.1109/icca49400.2020.9022809
fatcat:hrbkpumwlbax5jflqbxetmy4bq
Evaluation of Adversarial Training on Different Types of Neural Networks in Deep Learning-based IDSs
[article]
2020
arXiv
pre-print
networks (CNN) and recurrent neural networks (RNN). ...
We use the min-max approach to formulate the problem of training robust IDS against adversarial examples using two benchmark datasets. ...
We built ten adversarially trained models for each of the ANN, CNN, and RNN architectures, as shown in Fig. 6. In total, we built 30 adversarially trained IDS models on top of the six baseline IDS models. ...
arXiv:2007.04472v1
fatcat:3qothuaw6fh55idtksh453nd3a
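The min-max formulation above (inner maximization finds a worst-case perturbation, outer minimization trains on it) can be sketched minimally with logistic regression and an FGSM-style inner step; the toy data, epsilon, and learning rate are assumptions, and the paper's actual models are deep ANN/CNN/RNN classifiers:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy binary data: the label is decided by the sign of the first feature.
X = rng.standard_normal((200, 5))
y = (X[:, 0] > 0).astype(float)

w = np.zeros(5)
eps, lr = 0.1, 0.5          # perturbation budget and step size (assumptions)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(200):
    # Inner max: FGSM-style worst-case perturbation inside an eps-ball.
    p = sigmoid(X @ w)
    grad_x = (p - y)[:, None] * w[None, :]   # d loss / d x per sample
    X_adv = X + eps * np.sign(grad_x)
    # Outer min: gradient step on the adversarial examples.
    p_adv = sigmoid(X_adv @ w)
    grad_w = X_adv.T @ (p_adv - y) / len(y)
    w -= lr * grad_w

acc = ((sigmoid(X @ w) > 0.5) == (y > 0.5)).mean()
print(acc)
```

Training on the perturbed inputs rather than the clean ones is what makes the resulting classifier robust to adversarial examples within the epsilon budget.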
Showing results 1 — 15 out of 12,348 results