
Top-Down Neural Attention by Excitation Backprop [chapter]

Jianming Zhang, Zhe Lin, Jonathan Brandt, Xiaohui Shen, Stan Sclaroff
2016 Lecture Notes in Computer Science  
Inspired by a top-down human visual attention model, we propose a new backpropagation scheme, called Excitation Backprop, to pass along top-down signals downwards in the network hierarchy via a probabilistic  ...  We aim to model the top-down attention of a Convolutional Neural Network (CNN) classifier for generating task-specific attention maps.  ...  This research was supported in part by Adobe Research, US NSF grants 0910908 and 1029430, and gifts from NVIDIA.  ... 
doi:10.1007/978-3-319-46493-0_33 fatcat:csumsdperzbobbknkn272wjmu4
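
The propagation rule behind this entry is compact: each neuron's marginal winning probability (MWP) is redistributed to its children through positive weights only, in proportion to each child's activation. Below is a minimal sketch of one such step for a fully connected layer, assuming PyTorch; the function name and tensor layout are illustrative, not from the authors' code.

```python
import torch

def eb_step_linear(p_out, a_in, weight, eps=1e-12):
    """One Excitation Backprop step through a fully connected layer.

    p_out:  (batch, n_out) marginal winning probabilities (MWP) of the outputs
    a_in:   (batch, n_in)  forward activations of the inputs (assumed >= 0)
    weight: (n_out, n_in)  the layer's weight matrix
    Returns the (batch, n_in) MWP of the inputs.
    """
    w_pos = weight.clamp(min=0)              # keep excitatory connections only
    z = (a_in @ w_pos.t()).clamp(min=eps)    # normalizer Z_j = sum_i a_i * w_ji^+
    y = p_out / z                            # scale each parent's probability by Z_j
    return a_in * (y @ w_pos)                # P(a_i) = a_i * sum_j w_ji^+ P(a_j) / Z_j
```

Starting from a one-hot MWP on the chosen class unit and applying this rule layer by layer down to a convolutional feature map yields the task-specific attention map the abstract describes.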

Top-down Neural Attention by Excitation Backprop [article]

Jianming Zhang, Zhe Lin, Jonathan Brandt, Xiaohui Shen, Stan Sclaroff
2016 arXiv   pre-print
Inspired by a top-down human visual attention model, we propose a new backpropagation scheme, called Excitation Backprop, to pass along top-down signals downwards in the network hierarchy via a probabilistic  ...  We aim to model the top-down attention of a Convolutional Neural Network (CNN) classifier for generating task-specific attention maps.  ...  This research was supported in part by Adobe Research, US NSF grants 0910908 and 1029430, and gifts from NVIDIA.  ... 
arXiv:1608.00507v1 fatcat:6po7lnmyknconefxrtbmjuzbpu

Excitation Backprop for RNNs [article]

Sarah Adel Bargal, Andrea Zunino, Donghyun Kim, Jianming Zhang, Vittorio Murino, Stan Sclaroff
2018 arXiv   pre-print
In this work, we devise a formulation that simultaneously grounds evidence in space and time, in a single pass, using top-down saliency.  ...  Grounding decisions made by deep networks has been studied in spatial visual content, giving more insight into model predictions for images.  ...  This work was supported in part by NSF grants 1551572 and 1029430, an IBM PhD Fellowship, gifts from Adobe and NVidia, and Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior  ... 
arXiv:1711.06778v3 fatcat:io6onint6zbcvc6b5asfazrnee

Excitation Backprop for RNNs

Sarah Adel Bargal, Andrea Zunino, Donghyun Kim, Jianming Zhang, Vittorio Murino, Stan Sclaroff
2018 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition  
In this work, we devise a formulation that simultaneously grounds evidence in space and time, in a single pass, using top-down saliency.  ...  Grounding decisions made by deep networks has been studied in spatial visual content, giving more insight into model predictions for images.  ...  This work was supported in part by NSF grants 1551572 and 1029430, an IBM PhD Fellowship, gifts from Adobe and NVidia, and Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior  ... 
doi:10.1109/cvpr.2018.00156 dblp:conf/cvpr/BargalZKZMS18 fatcat:oqv3nyo52fbehd3ic3cuvsn5wy
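
A schematic reading of the single-pass spatiotemporal grounding for a vanilla RNN with h_t = f(W_x x_t + W_h h_{t-1}): at each step, the hidden state's winning probability is split between the current input and the previous state through the respective positive weights. The paper targets CNN-LSTM action-recognition models, so treat this as an illustration of the idea, not the authors' formulation. It reuses eb_step_linear from the Excitation Backprop entry above.

```python
import torch

def eb_through_time(p_h_T, xs, hs, w_x, w_h):
    """Distribute the final hidden state's MWP backwards in time (sketch).

    xs: inputs [x_0, ..., x_{T-1}], each (batch, n_in)
    hs: states fed into each step, hs[t] = h_{t-1}, each (batch, n_hid)
    w_x: (n_hid, n_in) input weights; w_h: (n_hid, n_hid) recurrent weights
    Returns per-timestep input MWPs: the temporal grounding of the decision.
    """
    p_h, p_inputs = p_h_T, []
    for t in reversed(range(len(xs))):
        a = torch.cat([xs[t], hs[t]], dim=1)   # joint children of h_t
        w = torch.cat([w_x, w_h], dim=1)       # (n_hid, n_in + n_hid)
        p = eb_step_linear(p_h, a, w)          # normalize over both pathways jointly
        p_x, p_h = p.split([xs[t].shape[1], hs[t].shape[1]], dim=1)
        p_inputs.append(p_x)
    return list(reversed(p_inputs))
```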

Lateral Inhibition-Inspired Convolutional Neural Network for Visual Attention and Saliency Detection

Chunshui Cao, Yongzhen Huang, Zilei Wang, Liang Wang, Ninglong Xu, Tieniu Tan
2018 Proceedings of the AAAI Conference on Artificial Intelligence (AAAI-18)  
In this paper, we propose to formulate lateral inhibition inspired by the related studies from neurobiology, and embed it into the top-down gradient computation of a general CNN for classification, i.e  ...  In our recent research, we find that modeling lateral inhibition in convolutional neural network (LICNN) is very useful for visual attention and saliency detection.  ...  This work is also supported by grants from NVIDIA and the NVIDIA DGX-1 AI Supercomputer.  ... 
doi:10.1609/aaai.v32i1.12238 fatcat:nbgjljkzx5dalgvjtktbnwjyju
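
The entry's key idea, embedding lateral inhibition into the top-down pass, can be caricatured in a few lines: each location in a top-down gradient map is suppressed by the pooled strength of its spatial neighbors, so only locally dominant responses survive. The sketch below (PyTorch; kernel size and strength are illustrative constants) is an assumption about the general mechanism, not the paper's exact formulation.

```python
import torch.nn.functional as F

def lateral_inhibition(grad_map, kernel_size=5, strength=0.5):
    """Inhibit each unit by the pooled top-down signal of its neighbors.

    grad_map: (batch, channels, H, W) top-down gradient / relevance map.
    """
    neighbors = F.avg_pool2d(grad_map, kernel_size, stride=1,
                             padding=kernel_size // 2)
    return (grad_map - strength * neighbors).clamp(min=0)  # keep locally dominant peaks
```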

Large-Scale Gradient-Free Deep Learning with Recursive Local Representation Alignment [article]

Alexander Ororbia, Ankur Mali, Daniel Kifer, C. Lee Giles
2020 arXiv   pre-print
This is empirical evidence that a backprop-free algorithm can scale up to larger datasets.  ...  In this paper, we propose a gradient-free learning procedure, recursive local representation alignment, for training large-scale neural architectures.  ...  Finally, in Figure 7, for the trained CIFAR-10 networks, we visualize the top-most latent representations acquired by those trained by backprop and rec-LRA, using t-SNE [56].  ... 
arXiv:2002.03911v3 fatcat:2k326rdnnjhutfmthzqrvrsxui
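
A schematic of the local-target idea behind representation alignment, assuming NumPy: compute a per-layer target through feedback weights, then update each layer from its own local error. The paper's recursive procedure refines these targets further; this shows only the family's local-target-plus-delta-rule skeleton, with illustrative names.

```python
import numpy as np

def lra_step(zs, ws, es, y_target, lr=0.01):
    """One local-representation-alignment-style update (schematic).

    zs: activations [z_0 (input), ..., z_L (output)] from a forward pass
    ws: forward weights, ws[l] maps z_l -> z_{l+1}
    es: feedback (error) weights, es[l] maps a mismatch at z_{l+1} back to z_l
    """
    target = y_target
    for l in reversed(range(len(ws))):
        err = zs[l + 1] - target              # purely local mismatch at layer l+1
        ws[l] -= lr * np.outer(err, zs[l])    # delta rule: error x presynaptic activity
        target = zs[l] - es[l] @ err          # target for the layer below
    return ws
```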

TSGB: Target-Selective Gradient Backprop for Probing CNN Visual Saliency [article]

Lin Cheng, Pengfei Fang, Yanjie Liang, Liao Zhang, Chunhua Shen, Hanzi Wang
2022 arXiv   pre-print
The explanation for deep neural networks has drawn extensive attention in the deep learning community over the past few years.  ...  Inspired by those observations, we propose a novel visual saliency method, termed Target-Selective Gradient Backprop (TSGB), which leverages rectification operations to effectively emphasize target classes  ...  These methods pay attention to extensive existing objects, similar to [26]. Excitation Backprop (EBP) [28] uses the contrastive marginal winning probability to propagate the top-down attention.  ... 
arXiv:2110.05182v2 fatcat:4pdtc6nervcefe4hmtyrqdauem
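
The "rectification operations" mentioned in the snippet can be illustrated with the guided-backprop-style building block such methods share: zero out gradient components that do not positively support the target score as the signal flows back through each ReLU. A minimal PyTorch sketch of that building block follows; TSGB's actual target-selective rule differs in how it decides which components count as target-supporting.

```python
import torch

def rectified_grad_hook(module, grad_in, grad_out):
    """Backward hook: a rectification operation that passes back only
    gradient components which positively support the target score."""
    return tuple(g.clamp(min=0) if g is not None else None for g in grad_in)

def rectified_saliency(model, x, class_idx):
    for m in model.modules():
        if isinstance(m, torch.nn.ReLU):
            m.register_full_backward_hook(rectified_grad_hook)
    x = x.clone().requires_grad_(True)
    model(x)[0, class_idx].backward()    # top-down pass from the target class only
    return x.grad.abs().max(dim=1)[0]    # (1, H, W) saliency map over the input
```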

Predictive Coding: a Theoretical and Experimental Review [article]

Beren Millidge, Anil Seth, Christopher L Buckley
2022 arXiv   pre-print
...  mathematical models of predictive coding, as well as in evaluating their potential biological plausibility for implementation in the brain and the concrete neurophysiological and psychological predictions made by  ...  feedback which preferentially excites certain neurons which best match with the top-down attentional preferences, and this feedback enhances their activity, helping them to inhibit the activity of their  ...  This capability is made possible by the top-down predictions conveyed by the hierarchical predictive coding network.  ... 
arXiv:2107.12979v4 fatcat:wfzvlaek7zbfhnhda4ljxuvyh4
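
For readers landing on this entry, the review's central objects can be stated compactly. In one common notation (a hedged summary with generic symbols, not a quotation of the paper): layer ℓ receives a top-down prediction of its activity from layer ℓ+1, activities descend the summed squared prediction-error energy, and weights follow a local Hebbian-like rule.

```latex
% Top-down prediction and local prediction error at layer \ell:
\[
  \hat{x}_\ell = W_\ell f(x_{\ell+1}), \qquad
  \varepsilon_\ell = x_\ell - \hat{x}_\ell
\]
% Inference: activities descend the summed squared prediction-error energy.
\[
  \dot{x}_\ell = -\varepsilon_\ell
    + f'(x_\ell) \odot \big( W_{\ell-1}^{\top} \varepsilon_{\ell-1} \big)
\]
% Learning: a local, Hebbian-like update of the prediction weights.
\[
  \Delta W_\ell \propto \varepsilon_\ell \, f(x_{\ell+1})^{\top}
\]
```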

Post-Selections in AI and How to Avoid Them [article]

Juyang Weng
2021 arXiv   pre-print
Neural network based Artificial Intelligence (AI) has reported increasing scales in experiments.  ...  We then analyze why error-backprop from randomly initialized weights suffers from severe local minima, why PSUVS lacks cross-validation, why PSUTS violates well-established protocols, and why every paper  ...  From high to low: top-down. From one area to the same area: lateral. X does not link with Z directly.  ... 
arXiv:2106.13233v2 fatcat:iamjyd4purgtlbdaqdymajbseu

Neuromodulated Goal-Driven Perception in Uncertain Domains [article]

Xinyun Zou, Soheil Kolouri, Praveen K. Pilly, Jeffrey L. Krichmar
2019 arXiv   pre-print
In this paper, contrastive excitation backprop (c-EB) was used in a goal-driven perception task with pairs of noisy MNIST digits, where the system had to increase attention to one of the two digits corresponding  ...  In uncertain domains, the goals are often unknown and need to be predicted by the organism or system.  ...  Acknowledgements This material is based upon work supported by the United States Air Force and DARPA under Contract No. FA8750-18-C-0103.  ... 
arXiv:1903.00068v1 fatcat:jzxl6nr4qndrbjlgl6folsvh5u
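
The contrastive variant used here (c-EB) sharpens a class's attention map by subtracting the map of a "dual" unit whose top-layer weights are negated, cancelling evidence shared by all classes. A hedged sketch of the top layer of that computation, reusing eb_step_linear from the Excitation Backprop entry above:

```python
import torch

def contrastive_eb_top(a_in, w_top, class_idx):
    """Contrastive MWP below the classifier layer (sketch).

    a_in:  (1, n_in) penultimate activations; w_top: (n_classes, n_in) weights.
    """
    one_hot = torch.zeros(1, w_top.shape[0])
    one_hot[0, class_idx] = 1.0
    p_pos = eb_step_linear(one_hot, a_in, w_top)    # evidence for the target class
    p_neg = eb_step_linear(one_hot, a_in, -w_top)   # evidence for its "dual" unit
    return (p_pos - p_neg).clamp(min=0)             # cancel class-agnostic evidence
```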

Dendritic error backpropagation in deep cortical microcircuits [article]

João Sacramento and Rui Ponte Costa and Yoshua Bengio and Walter Senn
2017 arXiv   pre-print
In this model synaptic learning is driven by a local dendritic prediction error that arises from a failure to predict the top-down input given the bottom-up activities.  ...  Finally, complementing this cortical circuit with a disinhibitory mechanism enables attention-like stimulus denoising and generation.  ...  This work has been supported by the Swiss National Science Foundation (grant 310030L-156863, WS) and the Human Brain Project.  ... 
arXiv:1801.00062v1 fatcat:uruwi5mq3bgxdl7fgeokof55dm
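
The "local dendritic prediction error" in this entry admits a compact statement. In a hedged paraphrase of the rule's general form (generic symbols, not the paper's notation): bottom-up synapses w change in proportion to the mismatch between somatic firing φ(u) and the firing predicted from the dendritic potential φ(v), scaled by the presynaptic rate r.

```latex
% Plasticity driven by a local dendritic prediction error (hedged paraphrase):
\[
  \Delta w \;\propto\; \big( \phi(u) - \phi(v) \big)\, r
\]
```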

Backprop-Free Reinforcement Learning with Active Neural Generative Coding [article]

Alexander Ororbia, Ankur Mali
2021 arXiv   pre-print
The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.  ...  In this work, we propose active neural generative coding, a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.  ...  Generalizing to Active Neural Coding: Given the definition of the NGC building block in the above section, we now turn our attention to the generalization that incorporates actions.  ... 
arXiv:2107.07046v2 fatcat:o43vhakmsvbbrg4vldbfscbc4i

There and Back Again: Revisiting Backpropagation Saliency Methods

Sylvestre-Alvise Rebuffi, Ruth Fong, Xu Ji, Andrea Vedaldi
2020 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
Saliency methods seek to explain the predictions of a model by producing an importance map across each input sample.  ...  Acknowledgments This work is supported by Mathworks/DTA, Open Philanthropy, EPSRC AIMS CDT and ERC 638009-IDIU.  ...  Top-down neural attention by excitation backprop. In Proc. ECCV, 2016.  ... 
doi:10.1109/cvpr42600.2020.00886 dblp:conf/cvpr/RebuffiFJV20 fatcat:rtqsyvhd5vg6bmo6u6zgmwgnfi
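
Many of the methods this paper revisits can be read as different ways of combining forward activations with backward gradients at an intermediate layer. The simplest such combination, shown below as a sketch (not the paper's proposal), takes their elementwise product and sums over channels to get a spatial importance map.

```python
def activation_gradient_saliency(a, g):
    """One simple member of the backprop-saliency family.

    a: forward activations at a layer, g: gradients of the class score at the
    same layer; both are PyTorch tensors of shape (batch, C, H, W).
    """
    return (a * g).sum(dim=1).clamp(min=0)   # (batch, H, W) importance map
```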

Neural Attention Models in Deep Learning: Survey and Taxonomy [article]

Alana Santana, Esther Colombini
2021 arXiv   pre-print
From the theoretical standpoint of attention, this survey provides a critical analysis of major neural attention models.  ...  Attention is a state of arousal capable of dealing with limited processing bottlenecks in human beings by focusing selectively on one piece of information while ignoring other perceptible information.  ...  Sclaroff, “Top-down neural attention by excitation backprop,” International Journal  ...  “Heterogeneous graph attention network,” in The World Wide Web Conference, 2019, pp. 2022–2032  ... 
arXiv:2112.05909v1 fatcat:qk2gljrl2rdyfbxw62n5cu6hzu

A brain basis of dynamical intelligence for AI and computational neuroscience [article]

Joseph D. Monaco, Kanaka Rajan, Grace M. Hwang
2021 arXiv   pre-print
This article was inspired by our symposium on dynamical neuroscience and machine learning at the 6th Annual US/NIH BRAIN Initiative Investigators Meeting.  ...  The deep neural nets of modern artificial intelligence (AI) have not achieved defining features of biological intelligence, including abstraction, causal learning, and energy-efficiency.  ...  Thus, global update rules like backprop have only recently received renewed theoretical attention in neuroscience [39].  ... 
arXiv:2105.07284v2 fatcat:ble5h45pk5fczn72dwco2m3rkm
Showing results 1 — 15 out of 154 results