94 Hits in 1.7 sec

Excitation Backprop for RNNs [article]

Sarah Adel Bargal, Andrea Zunino, Donghyun Kim, Jianming Zhang, Vittorio Murino, Stan Sclaroff
2018 arXiv   pre-print
However, such studies are relatively lacking for models of spatiotemporal visual content - videos.  ...  Deep models are state-of-the-art for many vision tasks including video action recognition and video captioning.  ...  Acknowledgments We thank Kate Saenko and Vasili Ramanishka for helpful discussions.  ... 
arXiv:1711.06778v3 fatcat:io6onint6zbcvc6b5asfazrnee

Excitation Backprop for RNNs

Sarah Adel Bargal, Andrea Zunino, Donghyun Kim, Jianming Zhang, Vittorio Murino, Stan Sclaroff
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
Figure 1 : Our proposed framework spatiotemporally highlights/grounds the evidence that an RNN model used in producing a class label or caption for a given input video.  ...  Our model employs a single backward pass to produce saliency maps that highlight the evidence that a given RNN used in generating its outputs.  ...  Acknowledgments We thank Kate Saenko and Vasili Ramanishka for helpful discussions.  ... 
doi:10.1109/cvpr.2018.00156 dblp:conf/cvpr/BargalZKZMS18 fatcat:oqv3nyo52fbehd3ic3cuvsn5wy
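
The snippet above describes producing saliency maps for a CNN-RNN video model with a single backward pass. Below is a minimal, hedged sketch of that general idea in PyTorch, using plain positive gradient×activation as a stand-in for the paper's probabilistic winner-take-all rule; the toy encoder, class index, and tensor sizes are all illustrative assumptions, not the authors' architecture.

```python
# Sketch: one backward pass from an RNN's class score back to per-frame CNN
# feature maps, keeping positive contributions as saliency. This is a plain
# gradient*activation stand-in, NOT Excitation Backprop's winner-take-all rule.
import torch
import torch.nn as nn

cnn = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(7))            # toy frame encoder
rnn = nn.LSTM(input_size=16 * 7 * 7, hidden_size=64, batch_first=True)
classifier = nn.Linear(64, 10)                           # 10 hypothetical classes

video = torch.randn(1, 8, 3, 112, 112)                   # (batch, time, C, H, W)
feats = []
for t in range(video.size(1)):
    f = cnn(video[:, t])                                 # (1, 16, 7, 7)
    f.retain_grad()                                      # keep grads for saliency
    feats.append(f)
seq = torch.stack([f.flatten(1) for f in feats], dim=1)  # (1, T, 784)
out, _ = rnn(seq)
score = classifier(out[:, -1])[:, 3]                     # evidence for class 3

score.backward()                                         # the single backward pass
saliency = [torch.relu(f.grad * f).sum(dim=1)[0]         # positive grad*activation
            for f in feats]                              # one (7, 7) map per frame
```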

Towards Visually Explaining Video Understanding Networks with Perturbation [article]

Zhenqiang Li, Weimin Wang, Zuoyue Li, Yifei Huang, Yoichi Sato
2020 arXiv   pre-print
In this paper, we investigate a generic perturbation-based method for visually explaining video understanding networks.  ...  For networks taking visual information as input, one basic but challenging explanation method is to identify and visualize the input pixels/regions that dominate the network's prediction.  ...  EB-R (excitation backprop for RNNs) [2] firstly extended the Excitation Backprop attribution method to the framework for videos, to be specific, the CNN-RNN structure.  ... 
arXiv:2005.00375v2 fatcat:lmblc2gwtvadposwuqn3afizwa
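
As a companion to the perturbation idea in the snippet above, here is a minimal occlusion-style sketch: zero out one spatio-temporal patch at a time and record how much the target class score drops. The `model` interface, patch size, and temporal window are assumptions, not the paper's optimized-mask method.

```python
# Occlusion-style perturbation attribution for a video classifier.
import torch

def occlusion_attribution(model, video, target, patch=28, t_win=2):
    """video: (1, T, C, H, W); returns a (T, H//patch, W//patch) relevance grid."""
    _, T, _, H, W = video.shape
    with torch.no_grad():
        base = model(video)[0, target]                    # unperturbed class score
        scores = torch.zeros(T, H // patch, W // patch)
        for t in range(T):
            for i in range(H // patch):
                for j in range(W // patch):
                    v = video.clone()
                    v[:, t:t + t_win, :, i * patch:(i + 1) * patch,
                      j * patch:(j + 1) * patch] = 0.0    # perturb one region
                    scores[t, i, j] = base - model(v)[0, target]  # score drop
    return scores  # larger drop => region mattered more for the prediction
```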

RcTorch: a PyTorch Reservoir Computing Package with Automated Hyper-Parameter Optimization [article]

Hayden Joy, Marios Mattheakis, Pavlos Protopapas
2022 arXiv   pre-print
For an RNN this means that the gradients must be calculated at every time step.  ...  Like the feed forward networks before them, RNNs were waiting for their day in the sun.  ... 
arXiv:2207.05870v1 fatcat:y4zdfvgdebc27f4o4wlbsjyzw4
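
The snippet contrasts gradient-based RNN training, where gradients must be computed at every time step, with reservoir computing. The sketch below illustrates that contrast with a toy echo state network in PyTorch: the recurrent weights stay fixed and only a linear readout is fit by ridge regression, so no backpropagation through time is needed. It is an illustration of the general idea, not the RcTorch API.

```python
# Toy echo state network: fixed random reservoir, ridge-regression readout.
import torch

n_in, n_res, n_out, T = 1, 200, 1, 500
torch.manual_seed(0)
W_in = torch.randn(n_res, n_in) * 0.5
W = torch.randn(n_res, n_res)
W *= 0.9 / torch.linalg.eigvals(W).abs().max()      # scale spectral radius < 1

u = torch.sin(torch.linspace(0, 20, T)).unsqueeze(1)  # toy input signal
y = torch.roll(u, -1, dims=0)                          # target: predict next value

x = torch.zeros(n_res)
states = []
for t in range(T):                                     # run reservoir forward only
    x = torch.tanh(W_in @ u[t] + W @ x)
    states.append(x)
X = torch.stack(states)                                # (T, n_res) state matrix

lam = 1e-4                                             # ridge regularization
W_out = torch.linalg.solve(X.T @ X + lam * torch.eye(n_res), X.T @ y)
pred = X @ W_out                                       # readout fit in one shot
```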

Adaptive Extreme Edge Computing for Wearable Devices

Erika Covi, Elisa Donati, Xiangpeng Liang, David Kappel, Hadi Heidari, Melika Payvand, Wei Wang
2021 Frontiers in Neuroscience  
We propose various solutions for biologically plausible models for continual learning in neuromorphic computing technologies for wearable sensors.  ...  Wearable devices are a fast-growing technology with impact on personal healthcare for both society and economy.  ...  Stefan Slesazeck for useful discussion on ferroelectric and memristive devices.  ... 
doi:10.3389/fnins.2021.611300 pmid:34045939 pmcid:PMC8144334 fatcat:5by77im5crcslgt7zj3wulzd5e

Interpreting video features: a comparison of 3D convolutional networks and convolutional LSTM networks [article]

Joonatan Mänttäri, Sofia Broomé, John Folkesson, Hedvig Kjellström
2020 arXiv   pre-print
A number of techniques for interpretability have been presented for deep learning in computer vision, typically with the goal of understanding what the networks have based their classification on.  ...  However, interpretability for deep video architectures is still in its infancy and we do not yet have a clear concept of how to decode spatiotemporal features.  ...  Building on Excitation backprop, Adel Bargal et al. (2018) produce saliency maps for video RNNs.  ... 
arXiv:2002.00367v2 fatcat:k3z6amfjfjggdkr43yvga5w7au

Learning the synaptic and intrinsic membrane dynamics underlying working memory in spiking neural network models [article]

Yinghao Li, Robert Kim, Terrence J Sejnowski
2020 bioRxiv   pre-print
A recurrent neural network (RNN) model trained to perform cognitive tasks is a useful computational tool for understanding how cortical circuits execute complex computations.  ...  Here, we developed a method to directly train not only synaptic-related variables but also membrane-related parameters of a spiking RNN model.  ...  Fig. 2 | RNNs trained for two additional WM tasks.  ... 
doi:10.1101/2020.06.11.147405 fatcat:yoi2wwgrifcslbe3uagtosnfpe
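
The abstract snippet mentions training membrane-related parameters of a spiking RNN directly. The sketch below shows one common way such a setup can look in PyTorch: the per-neuron membrane decay is an `nn.Parameter` trained alongside the synaptic weights, with a surrogate gradient for the spike threshold. The constants and the surrogate shape are illustrative assumptions, not the authors' exact method.

```python
# Spiking RNN cell with a trainable membrane decay and a surrogate gradient.
import torch
import torch.nn as nn

class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()                           # hard threshold forward
    @staticmethod
    def backward(ctx, grad_out):
        v, = ctx.saved_tensors
        return grad_out / (1.0 + 10.0 * v.abs()) ** 2    # smooth surrogate backward

class SpikingRNNCell(nn.Module):
    def __init__(self, n_in, n_rec):
        super().__init__()
        self.w_in = nn.Linear(n_in, n_rec, bias=False)
        self.w_rec = nn.Linear(n_rec, n_rec, bias=False)
        # membrane decay per neuron, trained together with the synaptic weights
        self.decay = nn.Parameter(torch.full((n_rec,), 0.9))

    def forward(self, x_t, v, s):
        # leaky membrane update with reset, recurrent + feedforward input
        v = self.decay * v * (1 - s) + self.w_in(x_t) + self.w_rec(s)
        s = SurrogateSpike.apply(v - 1.0)                # spike at threshold 1.0
        return v, s
```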

Temporal HeartNet: Towards Human-Level Automatic Analysis of Fetal Cardiac Screening Video [article]

Weilin Huang, Christopher P. Bridge, J. Alison Noble, Andrew Zisserman
2017 arXiv   pre-print
(ii) an anchor mechanism and Intersection over Union (IoU) loss are applied for improving localization accuracy.  ...  The contributions of the paper are three-fold: (i) a convolutional neural network architecture is developed for a multi-task prediction, which is computed by sliding a 3x3 window spatially through convolutional  ...  Fig. 3 : Attention maps of the heart orientation on test images, which are computed by the Excitation Backprop scheme described in [12] . Experimental results.  ... 
arXiv:1707.00665v1 fatcat:izgihl2tsreyvloxwi6rhyt5ca
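
The snippet mentions an anchor mechanism paired with an Intersection over Union (IoU) loss for localization. Below is a minimal sketch of a -log(IoU) loss for axis-aligned boxes; the (x1, y1, x2, y2) box format and the log form are assumptions rather than the paper's exact formulation.

```python
# IoU loss for axis-aligned bounding boxes.
import torch

def iou_loss(pred, target, eps=1e-6):
    """pred, target: (N, 4) boxes as (x1, y1, x2, y2)."""
    ix1 = torch.max(pred[:, 0], target[:, 0])
    iy1 = torch.max(pred[:, 1], target[:, 1])
    ix2 = torch.min(pred[:, 2], target[:, 2])
    iy2 = torch.min(pred[:, 3], target[:, 3])
    inter = (ix2 - ix1).clamp(min=0) * (iy2 - iy1).clamp(min=0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)
    return -torch.log(iou + eps).mean()                  # smaller loss for tighter overlap
```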

Spatio-Temporal Perturbations for Video Attribution

Zhenqiang Li, Weimin Wang, Zuoyue Li, Yifei Huang, Yoichi Sato
2021 IEEE Transactions on Circuits and Systems for Video Technology (Print)  
The attribution method provides a direction for interpreting opaque neural networks in a visual way by identifying and visualizing the input regions/pixels that dominate the output of a network.  ...  However, most existing attribution methods focus on explaining networks taking a single image as input and a few works specifically devised for video attribution come short of dealing with diversified  ...  For the CNN-RNN structure, EB-R (Excitation BP for RNNs) [27] extended the Excitation BP attribution method to adapt to the structure of the RNN.  ... 
doi:10.1109/tcsvt.2021.3081761 fatcat:om6pqhlaczfvpjqlvdmfm7j4ti
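
Where the occlusion sketch earlier slides a fixed patch, perturbation methods for video more often optimize a smooth mask over space and time. The sketch below shows that learned-mask flavor at a coarse 3D resolution; the mask size, sparsity weight, and `model` interface are placeholders, not the paper's objective.

```python
# Learn a coarse spatio-temporal mask that preserves the target score while
# keeping the mask small; the kept regions are the attribution.
import torch
import torch.nn.functional as F

def learn_st_mask(model, video, target, steps=100, lam=0.05):
    """video: (1, T, C, H, W); returns a (T, H, W) soft spatio-temporal mask."""
    _, T, C, H, W = video.shape
    m = torch.zeros(1, 1, T // 2, H // 8, W // 8, requires_grad=True)  # coarse mask logits
    opt = torch.optim.Adam([m], lr=0.05)
    for _ in range(steps):
        mask = torch.sigmoid(F.interpolate(m, size=(T, H, W), mode='trilinear',
                                           align_corners=False))       # smooth upsampled mask
        masked = video * mask.squeeze(1).unsqueeze(2)   # broadcast over channels
        score = model(masked)[0, target]
        loss = -score + lam * mask.mean()               # keep score high, mask sparse
        opt.zero_grad(); loss.backward(); opt.step()
    return torch.sigmoid(F.interpolate(m, size=(T, H, W), mode='trilinear',
                                       align_corners=False))[0, 0].detach()
```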

A tutorial survey of architectures, algorithms, and applications for deep learning

Li Deng
2014 APSIPA Transactions on Signal and Information Processing  
classification and for feature learning.  ...  Deep learning refers to a class of machine learning techniques, developed largely since 2006, where many stages of non-linear information processing in hierarchical architectures are exploited for pattern  ...  [51] have reported excellent results on using RNNs for LM. More recently, Mesnil et al. [52] reported the success of RNNs in spoken language understanding.  ... 
doi:10.1017/atsip.2013.9 fatcat:4l4uonhhcffkbfot2fztpfxo2e

Human Emotion Recognition with Electroencephalographic Multidimensional Features by Hybrid Deep Neural Networks

Youjun Li, Jiajin Huang, Haiyan Zhou, Ning Zhong
2017 Applied Sciences  
EEG MFI sequences to recognize human emotional states, where the hybrid deep neural network combined Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) Recurrent Neural Networks (RNN)  ...  For example, in the mental health care field, an automatic emotion analysis system can be constructed with our method to monitor the emotional variation of the subjects.  ...  The emotion scores given by the subjects are often based on the most exciting part of the entire video. Therefore, we needed to model the context information for long-term sequences.  ... 
doi:10.3390/app7101060 fatcat:beajya3p3ffttjsun4s2wg74oi
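
The snippet describes a hybrid network in which a CNN encodes each EEG multidimensional feature image (MFI) and an LSTM aggregates the per-frame codes over time. Here is a minimal sketch of that CNN+LSTM pattern; the layer sizes and three-class output are assumptions for illustration, not the paper's configuration.

```python
# CNN encodes each frame; LSTM aggregates the sequence; linear head classifies.
import torch
import torch.nn as nn

class CnnLstmClassifier(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())       # -> 16 * 4 * 4 = 256
        self.lstm = nn.LSTM(256, 64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                                # x: (B, T, 1, H, W)
        B, T = x.shape[:2]
        f = self.cnn(x.flatten(0, 1)).view(B, T, -1)     # encode every frame
        out, _ = self.lstm(f)
        return self.head(out[:, -1])                     # classify from last step

logits = CnnLstmClassifier()(torch.randn(2, 10, 1, 32, 32))   # -> (2, 3)
```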

A unified theory for the origin of grid cells through the lens of pattern formation

Ben Sorscher, Gabriel Mel, Surya Ganguli, Samuel A. Ocko
2019 Neural Information Processing Systems  
There are currently two seemingly unrelated frameworks for understanding these patterns.  ...  These results unify previous accounts of grid cell firing and provide a novel framework for predicting the learned representations of recurrent neural networks.  ...  Moreover, our theory predicts why hexagonal grids should emerge in RNNs trained to path integrate, but it does not explain how RNNs trained via backprop learn to stabilize and update these patterns in  ... 
dblp:conf/nips/SorscherMGO19 fatcat:5djsqjtv25fpvpz5nfxw47ctyi
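
The snippet refers to RNNs trained via backprop to path integrate. A minimal version of that training setup is sketched below: the network receives 2-D velocity inputs and is trained to report the integrated position. Network size, trajectory statistics, and the linear position readout are illustrative assumptions (the paper's models read out place-cell-like targets).

```python
# Train an RNN to integrate velocity inputs into position (path integration).
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=2, hidden_size=128, batch_first=True)
readout = nn.Linear(128, 2)                              # decode (x, y) position
opt = torch.optim.Adam(list(rnn.parameters()) + list(readout.parameters()), lr=1e-3)

for step in range(200):                                  # toy training loop
    vel = 0.1 * torch.randn(32, 50, 2)                   # random velocity sequences
    pos = vel.cumsum(dim=1)                              # ground-truth integrated path
    h, _ = rnn(vel)
    loss = ((readout(h) - pos) ** 2).mean()              # position decoding error
    opt.zero_grad(); loss.backward(); opt.step()
```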

Deep learning and generative methods in cheminformatics and chemical biology: navigating small molecule space intelligently

Douglas B Kell, Soumitra Samanta, Neil Swainston
2020 Biochemical Journal  
We give a high-level (non-mathematical) background to the deep learning revolution, and set out the crucial issue for chemical biology and informatics as a two-way mapping from the discrete nature of individual  ...  network; GRU, Gated Recurrent Units; LSTM, long short-term memory; MLP, multilayer perceptron; QSAR, Quantitative structure-activity relationship; RBF, radial basis function; ReLU, rectified linear unit; RNN  ...  The exciting prospect is for the discovery of entirely novel reactions based on those presently known; this is precisely the area in which generative methods can excel.  ... 
doi:10.1042/bcj20200781 pmid:33290527 pmcid:PMC7733676 fatcat:ujd4s2xyfrcjtgqef44jf5yihm

Training Spiking Neural Networks Using Lessons From Deep Learning [article]

Jason K. Eshraghian and Max Ward and Emre Neftci and Xinxin Wang and Gregor Lenz and Girish Dwivedi and Mohammed Bennamoun and Doo Seok Jeong and Wei D. Lu
2022 arXiv   pre-print
The brain is the perfect place to look for inspiration to develop more efficient neural networks.  ...  Some ideas are well accepted and commonly used amongst the neuromorphic engineering community, while others are presented or justified for the first time here.  ...  for their support.  ... 
arXiv:2109.12894v4 fatcat:zujzdtzaijak5bklbqufrxr57q

Explainable Deep Learning Methods in Medical Imaging Diagnosis: A Survey [article]

Cristiano Patrício, João C. Neves, Luís F. Teixeira
2022 arXiv   pre-print
Moreover, this work reviews the existing medical imaging datasets and the existing metrics for evaluating the quality of the explanations.  ...  The black-box-ness of deep learning models has raised the need for devising strategies to explain the decision process of these models, leading to the creation of the topic of eXplainable Artificial Intelligence  ...  The sentence RNN produces the topic vectors whereas the word RNN receives the output of the sentence RNN and infers the words that constitute the final report.  ... 
arXiv:2205.04766v2 fatcat:ngd7yb3z7fhkxkuttjm73u75wi
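
The snippet describes a hierarchical report generator in which a sentence-level RNN emits topic vectors and a word-level RNN expands each topic into words. Below is a minimal sketch of that two-level decoding pattern; the dimensions, greedy decoding, and vocabulary size are assumptions for illustration.

```python
# Hierarchical decoding: sentence RNN produces topic vectors, word RNN expands them.
import torch
import torch.nn as nn

vocab, emb, hid, topic = 1000, 64, 128, 128
sent_rnn = nn.GRUCell(topic, hid)                        # sentence-level RNN
topic_head = nn.Linear(hid, topic)
word_rnn = nn.GRUCell(emb, hid)
word_head = nn.Linear(hid, vocab)
embed = nn.Embedding(vocab, emb)

img_feat = torch.randn(1, topic)                         # stand-in image feature
h_sent = torch.zeros(1, hid)
report = []
for _ in range(3):                                       # generate 3 sentences
    h_sent = sent_rnn(img_feat, h_sent)
    t_vec = topic_head(h_sent)                           # topic vector for this sentence
    h_word = t_vec                                       # init word RNN from the topic
    tok = torch.zeros(1, dtype=torch.long)               # hypothetical <BOS> id 0
    sentence = []
    for _ in range(8):                                   # up to 8 words per sentence
        h_word = word_rnn(embed(tok), h_word)
        tok = word_head(h_word).argmax(dim=-1)           # greedy next word
        sentence.append(tok.item())
    report.append(sentence)
```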
Showing results 1–15 out of 94 results