
Supervised training of spiking neural networks for robust deployment on mixed-signal neuromorphic processors [article]

Julian Büchel, Dmitrii Zendrikov, Sergio Solinas, Giacomo Indiveri, Dylan R. Muir
2021 arXiv   pre-print
Our method trains SNNs to perform temporal classification tasks by mimicking a pre-trained dynamical system, using a local learning rule from non-linear control theory.  ...  For neuromorphic implementation of Spiking Neural Networks (SNNs), mismatch causes parameter variation between identically-configured neurons and synapses.  ...  ACKNOWLEDGEMENTS This project has received funding in part from the European Union's Horizon 2020 ERC project NeuroAgents (Grant No. 724295); from the European Union's Horizon 2020 research and innovation programme for  ... 
arXiv:2102.06408v4

A Novel Topology for End-to-end Temporal Classification and Segmentation with Recurrent Neural Network [article]

Taiyang Zhao
2019 arXiv   pre-print
For the classification task, the spikes work quite well, but for the segmentation task they do not provide boundary information.  ...  Connectionist temporal classification (CTC) has matured as an alignment-free approach to sequence transduction and shows competitive performance for end-to-end speech recognition.  ...  In this paper, we combine the temporal classification and segmentation (TCS) abilities in one framework with the use of a new topology.  ... 
arXiv:1912.04784v1

Recurrent Residual Learning for Action Recognition [article]

Ahsan Iqbal, Alexander Richard, Hilde Kuehne, Juergen Gall
2017 arXiv   pre-print
The approach extends ResNet, a state-of-the-art model for image classification.  ...  In this work, we propose a novel recurrent ConvNet architecture called recurrent residual networks to address the task of action recognition.  ...  Further, this work was supported by the AWS Cloud Credits for Research program.  ... 
arXiv:1706.08807v1

Multi-Temporal Land Cover Classification with Sequential Recurrent Encoders

Marc Rußwurm, Marco Körner
2018 ISPRS International Journal of Geo-Information  
In our experiments, we visualize internal activations over a sequence of cloudy and non-cloudy images and find several recurrent cells, which reduce the input activity for cloudy observations.  ...  Hence, we assume that our network has learned cloud-filtering schemes solely from input data, which could alleviate the need for tedious cloud-filtering as a preprocessing step for many EO approaches.  ...  Furthermore, we thank the Leibniz Supercomputing Centre (LRZ) for providing access to computational resources, such as the DGX-1 and P100 servers, and NVIDIA for providing one TITAN X GPU.  ... 
doi:10.3390/ijgi7040129

Recurrence Enhances the Spatial Encoding of Static Inputs in Reservoir Networks [chapter]

Christian Emmerich, René Felix Reinhart, Jochen Jakob Steil
2010 Lecture Notes in Computer Science  
We show that the network dynamics improve the nonlinear encoding of inputs in the reservoir state which can increase the task-specific performance.  ...  Therefore, we introduce attractor-based reservoir networks for processing of static patterns and compare their performance and encoding capabilities with a related feedforward approach.  ...  Except for Wine, none of the data sets are linearly separable, and they thus constitute nontrivial classification tasks. The introduced models are used for classification of each data set.  ... 
doi:10.1007/978-3-642-15822-3_19
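To make the snippet's idea concrete: feeding a *static* input into a contractive recurrent reservoir and letting the state settle yields a high-dimensional nonlinear encoding of that input, which a simple linear readout can then classify. A toy sketch, with assumed parameters (reservoir size, leak rate, weight scaling) that are not taken from the paper:

```python
import math
import random

# Minimal echo-state-style reservoir sketch: a static input u is fed
# repeatedly while the leaky-tanh reservoir state relaxes toward an
# attractor; the settled state is a nonlinear encoding of u.
# All weights and sizes below are illustrative assumptions.

random.seed(0)
N = 20                      # reservoir size
SCALE = 0.5                 # small spectral scale keeps dynamics contractive
W = [[random.uniform(-1, 1) * SCALE / N for _ in range(N)] for _ in range(N)]
W_in = [random.uniform(-1, 1) for _ in range(N)]

def settle(u, steps=50, leak=0.5):
    """Iterate the leaky reservoir update on a constant input u."""
    x = [0.0] * N
    for _ in range(steps):
        x = [(1 - leak) * x[i] + leak * math.tanh(
                 W_in[i] * u + sum(W[i][j] * x[j] for j in range(N)))
             for i in range(N)]
    return x

state = settle(0.7)  # settled nonlinear encoding of the static input 0.7
```

A linear classifier trained on such settled states is the "attractor-based" counterpart of the feedforward baseline the snippet compares against.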

LatticeRnn: Recurrent Neural Networks Over Lattices

Faisal Ladhak, Ankur Gandhe, Markus Dreyer, Lambert Mathias, Ariya Rastrow, Björn Hoffmeister
2016 Interspeech 2016  
In this paper, we use LATTICERNNs for a classification task: each lattice represents the output from an automatic speech recognition (ASR) component of a spoken language understanding (SLU) system, and  ...  We present a new model called LATTICERNN, which generalizes recurrent neural networks (RNNs) to process weighted lattices as input, instead of sequences.  ...  Computation Over Lattices Neural network components can be broken down into two general types: temporal and non-temporal.  ... 
doi:10.21437/interspeech.2016-1583 dblp:conf/interspeech/LadhakGDMRH16

Learning Sequence Representations by Non-local Recurrent Neural Memory [article]

Wenjie Pei, Xin Feng, Canmiao Fu, Qiong Cao, Guangming Lu, Yu-Wing Tai
2022 arXiv   pre-print
Typical methods for supervised sequence representation learning are built upon recurrent neural networks to capture temporal dependencies.  ...  To tackle this limitation, we propose the Non-local Recurrent Neural Memory (NRNM) for supervised sequence representation learning, which performs non-local operations by means of a self-attention mechanism  ...  ,T in Equation 13 for a sequence with length T can be used for any sequence prediction task.  ... 
arXiv:2207.09710v1

Large-scale weakly supervised audio classification using gated convolutional neural network [article]

Yong Xu, Qiuqiang Kong, Wenwu Wang, Mark D. Plumbley
2017 arXiv   pre-print
In this paper, we present a gated convolutional neural network and a temporal attention-based localization method for audio classification, which won the 1st place in the large-scale weakly supervised  ...  A convolutional recurrent neural network (CRNN) with learnable gated linear unit (GLU) non-linearities applied to the log Mel spectrogram is proposed.  ...  The pooling size is 2x2 for the audio tagging sub-task and 1x2 for the sound event detection sub-task. One bi-directional gated recurrent neural network with 128 units is used.  ... 
arXiv:1710.00343v1
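On the GLU non-linearity this snippet mentions: a gated linear unit multiplies a linear path elementwise by a sigmoid "gate" computed from a second linear path, letting the network learn per-feature attention. A deliberately simplified per-feature sketch (toy scalar weights, not the paper's trained convolutional CRNN):

```python
import math

# Sketch of gated-linear-unit (GLU) feature gating:
#   output = linear(x) * sigmoid(gate(x)), elementwise.
# Here each feature gets scalar weights w_lin / w_gate for illustration;
# the paper applies the same gating to convolutional feature maps.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def glu(x, w_lin, w_gate):
    """Elementwise GLU over a feature vector x."""
    return [xi * wl * sigmoid(xi * wg)
            for xi, wl, wg in zip(x, w_lin, w_gate)]
```

With a zero gate weight the sigmoid is 0.5 and the linear path passes at half strength; a strongly negative gate suppresses the feature toward zero, which is the attention-like behavior the paper exploits for weak labels.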

Effective and Efficient Computation with Multiple-timescale Spiking Recurrent Neural Networks

Bojian Yin, Federico Corradi, Sander M. Bohté
2020 International Conference on Neuromorphic Systems 2020  
Here, for sequential and streaming tasks, we demonstrate how a novel type of adaptive spiking recurrent neural network (SRNN) is able to achieve state-of-the-art performance compared to other spiking neural  ...  From this, we calculate a >100x energy improvement for our SRNNs over classical RNNs on the harder tasks.  ...  The authors gratefully acknowledge the support from the organizers of the Capo Caccia Neuromorphic Cognition 2019 workshop and Neurotech CSA, as well as Jibin Wu and Saray Soldado Magraner for helpful  ... 
doi:10.1145/3407197.3407225 dblp:conf/icons2/YinCB20

Temporal Action Localization with Pyramid of Score Distribution Features

Jun Yuan, Bingbing Ni, Xiaokang Yang, Ashraf A. Kassim
2016 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)  
Second, inter-frame consistency is further explored by incorporating PSDF into the state-of-the-art Recurrent Neural Networks, which gives additional performance gain in detecting actions in temporally  ...  We investigate the feature design and classification architectures in temporal action localization.  ...  [38] proposed a hybrid deep network for video classification, which uses LSTM networks on top of spatial and temporal CNN features. Vivek V. et al.  ... 
doi:10.1109/cvpr.2016.337 dblp:conf/cvpr/YuanNYK16

Convolutional Drift Networks for Video Classification [article]

Dillon Graham, Seyed Hamed Fatemi Langroudi, Christopher Kanan, and Dhireesha Kudithipudi
2017 arXiv   pre-print
Temporal information is often handled using hand-crafted features or Recurrent Neural Networks, but this can be overly specific or prohibitively complex.  ...  Analyzing spatio-temporal data like video is a challenging task that requires processing visual and temporal information effectively.  ...  Using these networks for temporal data (e.g. video analysis) introduces several new challenges, typically addressed using Recurrent Neural Networks (RNNs).  ... 
arXiv:1711.01201v1

GeThR-Net: A Generalized Temporally Hybrid Recurrent Neural Network for Multimodal Information Fusion [article]

Ankit Gandhi, Arjun Sharma, Arijit Biswas, Om Deshmukh
2016 arXiv   pre-print
M additional components are added to the network which extract discriminative but non-temporal cues from each modality.  ...  tasks.  ...  LSTM [9], a Recurrent Neural Network (RNN) [36] architecture, has been extremely successful in temporal modelling and classification tasks such as handwriting recognition [8], action recognition  ... 
arXiv:1609.05281v1

Non-local Recurrent Neural Memory for Supervised Sequence Modeling [article]

Canmiao Fu and Wenjie Pei and Qiong Cao and Chaopeng Zhang and Yong Zhao and Xiaoyong Shen and Yu-Wing Tai
2019 arXiv   pre-print
Typical methods for supervised sequence modeling are built upon recurrent neural networks to capture temporal dependencies.  ...  To tackle this limitation, we propose the Non-local Recurrent Neural Memory (NRNM) for supervised sequence modeling, which performs non-local operations to learn full-order interactions within a sliding  ...  ,T in Equation 8 for a sequence with length T can be used for any sequence prediction task such as step-wise prediction (like language modeling) or sequence classification (like action classification).  ... 
arXiv:1908.09535v1
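The "non-local operations ... within a sliding window" this snippet describes can be pictured as scaled dot-product self-attention restricted to a window, so each step aggregates all steps in its window rather than only the previous hidden state. A toy causal-window sketch (dimensions and the window length are illustrative assumptions; the NRNM block itself is more elaborate):

```python
import math

# Toy sliding-window self-attention: each position attends to the last
# `win` positions (including itself) with scaled dot-product weights.
# This only illustrates the non-local aggregation idea, not the NRNM.

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def window_attention(seq, win=3):
    """seq: list of feature vectors; returns attended vectors."""
    d = len(seq[0])
    out = []
    for t, q in enumerate(seq):
        ctx = seq[max(0, t - win + 1): t + 1]   # causal sliding window
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in ctx]
        weights = softmax(scores)
        out.append([sum(w * v[i] for w, v in zip(weights, ctx))
                    for i in range(d)])
    return out
```

Because every pair of positions inside the window interacts directly, dependencies no longer have to be carried step by step through a recurrent state, which is the limitation the abstract targets.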

Temporal HeartNet: Towards Human-Level Automatic Analysis of Fetal Cardiac Screening Video [article]

Weilin Huang, Christopher P. Bridge, J. Alison Noble, Andrew Zisserman
2017 arXiv   pre-print
The contributions of the paper are three-fold: (i) a convolutional neural network architecture is developed for a multi-task prediction, which is computed by sliding a 3x3 window spatially through convolutional  ...  This results in a spatial-temporal model that precisely describes detailed heart parameters in challenging US videos.  ...  Temporally Recurrent Network To incorporate temporal information into our detection network, we design region-level recurrent connections that compute temporal information at each spatial location of the  ... 
arXiv:1707.00665v1

Motif-topology and Reward-learning improved Spiking Neural Network for Efficient Multi-sensory Integration [article]

Shuncheng Jia, Ruichen Zuo, Tielin Zhang, Hongxing Liu, Bo Xu
2022 arXiv   pre-print
MR-SNN contains 13 types of 3-node Motif topologies, which are first extracted from independent single-sensory learning paradigms and then integrated for multi-sensory classification.  ...  Network architectures and learning principles are key in forming complex functions in artificial neural networks (ANNs) and spiking neural networks (SNNs).  ...  Then retrain the w_{i,j} of the network with the frozen Motif mask M_t^{r,l}. 5. Test the performance of SNNs using these new masks in the multi-sensory classification tasks, and make comparisons.  ... 
arXiv:2202.06821v1