Generating Chinese Classical Poems with RNN Encoder-Decoder
[chapter]
2017
Lecture Notes in Computer Science
We find that, for the simulations we run, attention is a necessary and sufficient mechanism for learning generalizable reduplication. ...
This paper examines the generalization abilities of encoder-decoder networks on a class of subregular functions characteristic of natural language reduplication. ...
Simple (SRNN) and gated (GRU) recurrence relations were tested as the encoder and decoder recurrent layers. In SRNN layers the network's state at any timepoint, h_t, is dependent only on the input at ...
doi:10.1007/978-3-319-69005-6_18
fatcat:o6ipo2tecrg6tpvvo76nocvqzu
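The snippet above contrasts simple (SRNN) and gated (GRU) recurrence relations. A minimal sketch of the two update rules, assuming conventional notation (the weight names here are illustrative, not the paper's):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def srnn_step(h_prev, x, W_h, W_x, b):
    """Simple RNN: the state h_t depends only on the previous state and the current input."""
    return np.tanh(W_h @ h_prev + W_x @ x + b)

def gru_step(h_prev, x, W_z, U_z, W_r, U_r, W_h, U_h):
    """GRU: update (z) and reset (r) gates control how much of the past state survives."""
    z = sigmoid(W_z @ x + U_z @ h_prev)             # update gate
    r = sigmoid(W_r @ x + U_r @ h_prev)             # reset gate
    h_cand = np.tanh(W_h @ x + U_h @ (r * h_prev))  # candidate state
    return (1 - z) * h_prev + z * h_cand
```

The gates are what let a GRU retain information over long spans; an SRNN must encode everything in the single tanh-squashed state.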
Page 5678 of Mathematical Reviews Vol. , Issue 99h
[page]
1999
Mathematical Reviews
Cohen, Learning a deterministic finite automaton with a recurrent neural network (90-101); José Ruiz, Salvador España and Pedro García [Pedro Luis García Pérez], Locally threshold testable languages ...
-24); Rajesh Parekh, Codrin Nichitiu and Vasant Honavar, A polynomial time incremental algorithm for learning DFA (37-49); Antonio Castellanos, Approximate learning of random subsequential transducers ...
Page 3184 of Mathematical Reviews Vol. , Issue 98E
[page]
1998
Mathematical Reviews
recurrent neural networks (262-273); F. ...
Sempere and Antonio Fos, Learning linear grammars from structural information (126-133); Sabine Deligne, François Yvon and Frédéric Bimbot, Introducing statistical dependencies and structural constraints ...
Personalized Purchase Prediction of Market Baskets with Wasserstein-Based Sequence Matching
2019
Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining - KDD '19
Our contributions are as follows: (1) We propose similarity matching based on subsequential dynamic time warping (SDTW) as a novel predictor of market baskets. ...
In fact, state-of-the-art approaches are limited to intuitive decision rules for pattern extraction. ...
Recurrent neural networks are effective at learning sequences, yet they return an output vector of a fixed size and can thus not adapt to the variable size of market baskets. ...
doi:10.1145/3292500.3330791
dblp:conf/kdd/KrausF19
fatcat:rrutdwigkrcqfciy2vsuyzude4
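This result's predictor is built on subsequential dynamic time warping (SDTW). For orientation, a minimal sketch of plain DTW between two sequences; the paper's subsequential variant differs in that it additionally matches against subsequences, which this sketch does not implement:

```python
import numpy as np

def dtw_distance(a, b, dist=lambda x, y: abs(x - y)):
    """Classic dynamic time warping: minimal cumulative cost of aligning
    sequence a to sequence b, allowing timesteps to stretch or compress."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(a[i - 1], b[j - 1])
            # extend the cheapest of: insertion, deletion, or match
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Because the alignment may repeat elements, `dtw_distance([1, 2, 3], [1, 2, 2, 3])` is 0: the warping path matches the single 2 against both 2s at no cost.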
Tips and Tricks for Visual Question Answering: Learnings from the 2017 Challenge
2018
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
Deep Learning has had a transformative impact on Computer Vision, but for all of the success there is also a significant cost. ...
scientific progress in the field. ...
The former simply initializes and learns the weights like any others in the network. ...
doi:10.1109/cvpr.2018.00444
dblp:conf/cvpr/TeneyAHH18
fatcat:nbsvdg5qdrddfhyczlg7nxn7ny
Forecast the Plausible Paths in Crowd Scenes
2017
Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence
Extensive experiments on public datasets demonstrate that our method obtains the state-of-the-art performance in both structured and unstructured scenes by exploring the complex and uncertain motion patterns ...
Specifically, we derive a social-aware LSTM to explore the crowd dynamic, resulting in a hidden feature embedding the rich prior in massive data. ...
Social-Aware LSTM In this section, we propose to learn a representation of the crowd dynamics with a recurrent LSTM by incorporating the nearby trajectories. ...
doi:10.24963/ijcai.2017/386
dblp:conf/ijcai/SuZDZ17
fatcat:mx357gte7bdnhlknja2bdo65yq
Recurrent Residual Convolutional Neural Network based on U-Net (R2U-Net) for Medical Image Segmentation
[article]
2018
arXiv
pre-print
In this paper, we propose a Recurrent Convolutional Neural Network (RCNN) based on U-Net as well as a Recurrent Residual Convolutional Neural Network (RRCNN) based on U-Net models, which are named RU-Net ...
Deep learning (DL) based semantic segmentation methods have been providing state-of-the-art performance in the last few years. ...
Here t=2 (0 ~ 2) refers to the recurrent convolutional operation that includes one single convolution layer followed by two subsequential recurrent convolutional layers. ...
arXiv:1802.06955v5
fatcat:lcg67b3wffea7ik7j6kzqbsbnu
Stochastic subgradient method converges on tame functions
[article]
2018
arXiv
pre-print
popular deep learning architectures. ...
In particular, this work endows the stochastic subgradient method, and its proximal extension, with rigorous convergence guarantees for a wide class of problems arising in data science---including all ...
loss functions that are recursively defined, including convolutional neural networks, recurrent neural networks, and feed-forward networks. ...
arXiv:1804.07795v3
fatcat:menztamfqjgyhgbz7363ycnwe4
Hyperspectral Image Classification with Deep Metric Learning and Conditional Random Field
[article]
2019
arXiv
pre-print
and the utilization of neural networks. ...
The deep metric learning model is supervised by the center loss to produce spectrum-based features that gather more tightly in Euclidean space within classes. ...
This category of algorithms mainly include convolutional neural network (CNN) [14] [15] [16] , recurrent neural network (RNN) [17] [18] [19] , and deep metric learning (DML) [20] [21] [22] , to name ...
arXiv:1903.06258v2
fatcat:kbqbqfotyjgbtkpyrefalibjda
Traffic Data Imputation and Prediction: An Efficient Realization of Deep Learning
2020
IEEE Access
In consequence, a forecasting model via deep learning based methods is proposed to predict the traffic flow from the recovered data set. ...
The experiments demonstrate the effectiveness of using deep learning based imputation in improving the accuracy of traffic flow prediction. ...
the typical deep neural network like SAEs, recent examples applied in ITSs include convolutional neural network (CNN) [23] , recurrent neural network (RNN) [24] ,
FIGURE 1. The construction of SAEs ...
doi:10.1109/access.2020.2978530
fatcat:3zrsqxfa7vh5ppkpxre4iirnie
Tips and Tricks for Visual Question Answering: Learnings from the 2017 Challenge
[article]
2017
arXiv
pre-print
To help further research in the area, we describe in detail our high-performing, though relatively simple model. ...
The performance of deep neural networks for VQA is very dependent on choices of architectures and hyperparameters. ...
The former simply initializes and learns the weights like any others in the network. ...
arXiv:1708.02711v1
fatcat:v75rlypszrefjdwxgar7x3atkm
Opportunities for integrated photonic neural networks
2020
Nanophotonics
This paper specifically reviews the prospects of integrated optical solutions for accelerating inference and training of artificial neural networks. ...
In an echo state network with a linear output layer, the weights can be learned by a simple ridge regression. ...
the SOA network, despite the simple network architecture. ...
doi:10.1515/nanoph-2020-0297
fatcat:cfkpayw6jrf6riqb26woyi7yfu
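The snippet above notes that in an echo state network the output weights can be learned by simple ridge regression. A minimal sketch under standard ESN assumptions (random fixed reservoir, spectral radius below 1, linear readout; all names and the identity-reconstruction task are illustrative):

```python
import numpy as np

def esn_readout(states, targets, ridge=1e-6):
    """Ridge-regression readout: W_out = Y S^T (S S^T + lambda I)^-1,
    where S (n_units x n_samples) stacks reservoir states column-wise."""
    S, Y = states, targets
    n = S.shape[0]
    return Y @ S.T @ np.linalg.inv(S @ S.T + ridge * np.eye(n))

rng = np.random.default_rng(1)
W = rng.normal(size=(50, 50))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()       # fix spectral radius at 0.9
W_in = rng.normal(size=(50, 1))

u = np.sin(np.linspace(0, 8 * np.pi, 200))[None, :]  # toy input signal
h = np.zeros((50, 1))
states = []
for t in range(u.shape[1]):                          # run the fixed reservoir
    h = np.tanh(W @ h + W_in @ u[:, t:t + 1])
    states.append(h[:, 0])
S = np.array(states).T                               # (50, 200)

W_out = esn_readout(S, u)                            # only these weights are learned
```

The point of the design is that training never touches the recurrent weights W: only the linear readout is fit, which reduces "training" to one closed-form regression.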
Pay Attention to Raw Traces: A Deep Learning Architecture for End-to-End Profiling Attacks
2021
Transactions on Cryptographic Hardware and Embedded Systems
Many papers have recently investigated the abilities of deep learning in profiling traces. Some of them also aim at the countermeasures (e.g., masking) simultaneously. ...
With the renaissance of deep learning, the side-channel community also notices the potential of this technology, which is highly related to the profiling attacks in the side-channel context. ...
Meanwhile, the results of the simplification of our architecture imply the recurrent layer is critical in combining features. A simple attention structure (weighted sum) is far from enough. ...
doi:10.46586/tches.v2021.i3.235-274
fatcat:72sy4qeytrdnlaeeqzb7rjkn3y
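For context on the snippet's claim that "a simple attention structure (weighted sum) is far from enough": that baseline scores each timestep, softmaxes the scores, and sums the features. A minimal sketch (names and shapes are illustrative, not the paper's architecture):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def weighted_sum_attention(features, w):
    """Simple attention over T timesteps: one learned score vector w,
    softmax-normalized weights, and a single weighted-sum context vector."""
    scores = features @ w        # (T,) one scalar score per timestep
    alpha = softmax(scores)      # attention weights, sum to 1
    return alpha @ features      # (d,) context vector
```

A single weighted sum collapses the whole trace into one vector, which is exactly the limitation the paper's recurrent combining layer is meant to overcome.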
Liquid State Machines: Motivation, Theory, and Applications
[chapter]
2011
Computability in Context
The Liquid State Machine (LSM) has emerged as a computational model that is more adequate than the Turing machine for describing computations in biological networks of neurons. ...
This chapter reviews the motivation for this model, its theoretical background, and current work on implementations of this model in innovative artificial computing devices. ...
for a subsequent linear learning device, that work well independently of the concrete computational tasks that are subsequentially learned by the learning device. ...
doi:10.1142/9781848162778_0008
fatcat:n675dx47ife4zohzsailtgqciu
On-Chip Passive Photonic Reservoir Computing with Integrated Optical Readout
2017
2017 IEEE International Conference on Rebooting Computing (ICRC)
In detail, we discuss how integrated reservoirs can be scaled up by injecting multiple copies of the input. ...
Reservoir computing (RC) [1] , [2] initially emerged as a means to train recurrent neural networks. ...
Apart from the considerable challenges in hardware development, an integrated optical readout also calls for novel machine learning techniques. ...
doi:10.1109/icrc.2017.8123673
dblp:conf/icrc/FreibergerKBD17
fatcat:f4zl3jvokrfeddrsuukhtjcwue
Showing results 1 — 15 out of 100 results