
Sparse Coding Using the Locally Competitive Algorithm on the TrueNorth Neurosynaptic System

Kaitlin L. Fair, Daniel R. Mendat, Andreas G. Andreou, Christopher J. Rozell, Justin Romberg, David V. Anderson
2019 Frontiers in Neuroscience  
We discuss data structures and representation as well as the architecture of functional processing units that perform non-linear threshold, vector-matrix multiplication.  ...  Experimental results with the LCA algorithm using the limited precision, fixed-point arithmetic on TrueNorth compare favorably with results using floating-point computations on a general purpose computer  ...  AUTHOR CONTRIBUTIONS KF independently programmed all computational units with the exception of the vector-matrix multiplication.  ... 
doi:10.3389/fnins.2019.00754 pmid:31396039 pmcid:PMC6664083 fatcat:pewua7di7nbc7e6br5gyyl2iji
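
Editor's sketch: a minimal floating-point rendering of the LCA dynamics the paper maps onto TrueNorth's threshold and vector-matrix units. The dictionary, step size, and threshold below are illustrative assumptions; the fixed-point, spiking neurosynaptic implementation is not modeled.

```python
import numpy as np

def soft_threshold(u, lam):
    """The non-linear threshold unit: shrink toward zero, clip below lam."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca(Phi, s, lam=0.1, tau=10.0, dt=1.0, steps=200):
    """LCA dynamics for the sparse coding objective ||s - Phi a||^2 + lam ||a||_1."""
    b = Phi.T @ s                              # drive: one vector-matrix multiply
    G = Phi.T @ Phi - np.eye(Phi.shape[1])     # lateral inhibition weights
    u = np.zeros(Phi.shape[1])                 # membrane potentials
    for _ in range(steps):
        a = soft_threshold(u, lam)             # sparse activations
        u += (dt / tau) * (b - u - G @ a)      # leaky integration + competition
    return soft_threshold(u, lam)

# toy usage: recover a 2-sparse code over a random normalized dictionary
rng = np.random.default_rng(0)
Phi = rng.normal(size=(20, 50))
Phi /= np.linalg.norm(Phi, axis=0)
a = lca(Phi, Phi[:, 3] + 0.5 * Phi[:, 17])
```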

Brain-Inspired Learning on Neuromorphic Substrates

Friedemann Zenke, Emre O. Neftci
2021 Proceedings of the IEEE  
Furthermore, we motivate a sparse approximation based on block-diagonal Jacobians, which reduces the algorithm's computational complexity, diminishes the nonlocal information requirements, and empirically  ...  However, training on neuromorphic substrates creates significant challenges due to the offline character and the required nonlocal computations of gradient-based learning algorithms.  ...  Thus, from an implementation standpoint, sparse connectivity matrices are preferable on neuromorphic hardware.  ... 
doi:10.1109/jproc.2020.3045625 fatcat:pelkbpbg5jg7pjyvkvtpgrt2su
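
As a toy illustration of the sparse approximation the abstract mentions: the sketch below keeps only the diagonal blocks of a Jacobian, discarding the non-local cross-group terms. The NumPy framing and block size are assumptions, not the authors' code.

```python
import numpy as np

def block_diagonal_approx(J, block_size):
    """Zero everything outside the diagonal blocks of a square Jacobian.
    Off-diagonal blocks carry the non-local cross-group terms; dropping
    them is what makes the resulting update rule local and cheap."""
    approx = np.zeros_like(J)
    for start in range(0, J.shape[0], block_size):
        stop = min(start + block_size, J.shape[0])
        approx[start:stop, start:stop] = J[start:stop, start:stop]
    return approx

# a dense 6x6 Jacobian reduced to three 2x2 local blocks
J = np.arange(36.0).reshape(6, 6)
J_sparse = block_diagonal_approx(J, block_size=2)
```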

Brain-Inspired Learning on Neuromorphic Substrates [article]

Friedemann Zenke, Emre O. Neftci
2020 arXiv   pre-print
Further, we motivate a sparse approximation based on block-diagonal Jacobians, which reduces the algorithm's computational complexity, diminishes the non-local information requirements, and empirically  ...  However, training on neuromorphic substrates creates significant challenges due to the offline character and the required non-local computations of gradient-based learning algorithms.  ...  [9, 101] trained a spiking neuron, the Tempotron, as a binary classifier on input spike trains with a sparse temporal code.  ... 
arXiv:2010.11931v1 fatcat:e7bwgrmynvgmfkuordiqb3zusq

Neuromorphic Nearest-Neighbor Search Using Intel's Pohoiki Springs [article]

E. Paxon Frady, Garrick Orchard, David Florey, Nabil Imam, Ruokun Liu, Joyesh Mishra, Jonathan Tse, Andreas Wild, Friedrich T. Sommer, Mike Davies
2020 arXiv   pre-print
Neuromorphic computing applies insights from neuroscience to uncover innovations in computing technology.  ...  Compared to state-of-the-art conventional CPU-based implementations, we achieve superior latency, index build time, and energy efficiency when evaluated on several standard datasets containing over 1 million  ...  Our simple approximate algorithm computes and searches the matrix-vector product (1) on neuromorphic hardware.  ... 
arXiv:2004.12691v1 fatcat:2tnli7asl5etnmvts34f7m65uu
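
The quoted "matrix-vector product (1)" is the standard inner-product similarity search. A minimal NumPy rendering (function names are mine; the Loihi/Pohoiki Springs spiking machinery is not modeled):

```python
import numpy as np

def build_index(data):
    """'Index build' here is just stacking and L2-normalizing the stored
    patterns so that dot products rank by cosine similarity."""
    return data / np.linalg.norm(data, axis=1, keepdims=True)

def nearest_neighbors(index, query, k=5):
    """Score every stored pattern with a single matrix-vector product,
    then take the top k."""
    scores = index @ (query / np.linalg.norm(query))
    return np.argsort(-scores)[:k]

# toy usage
rng = np.random.default_rng(0)
index = build_index(rng.normal(size=(100_000, 64)))
hits = nearest_neighbors(index, rng.normal(size=64))
```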

Field-Programmable Crossbar Array (FPCA) for Reconfigurable Computing [article]

Mohammed A. Zidan, YeonJoo Jeong, Jong Hong Shin, Chao Du, Zhengya Zhang, Wei D. Lu
2017 arXiv   pre-print
The system can be tailored to achieve maximal energy efficiency based on the data flow by dynamically allocating the basic computing fabric for storage, arithmetic, and analog computing including neuromorphic  ...  However, both the CMOS scaling and the classical computer architecture are approaching fundamental and practical limits, and new computing architectures based on emerging devices, such as resistive random-access  ...  The vector-vector multiplication algorithm is given below: This algorithm can be extended to a vector-matrix multiplication as illustrated in Figure 6b, where the vector-matrix multiplication can be  ... 
arXiv:1612.02913v4 fatcat:pg4gcagvkjek5callja5dy4xoq
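
On a crossbar, the analog vector-matrix multiply falls out of Ohm's and Kirchhoff's laws: row voltages drive per-device currents that sum down each column. A schematic NumPy model (the differential-pair mapping for signed weights and the g_max value are illustrative assumptions, not the FPCA scheme):

```python
import numpy as np

def crossbar_vmm(V, G):
    """Analog vector-matrix multiply on a resistive crossbar: applying
    row voltages V to conductances G yields column currents
    I_j = sum_i V_i * G_ij (Ohm's law plus Kirchhoff's current law)."""
    return V @ G

def signed_vmm(V, W, g_max=1e-4):
    """Signed weights via a differential pair of non-negative conductance
    arrays; g_max (full-scale conductance, in siemens) is illustrative."""
    G_pos = np.clip(W, 0.0, None) * g_max
    G_neg = np.clip(-W, 0.0, None) * g_max
    return crossbar_vmm(V, G_pos) - crossbar_vmm(V, G_neg)
```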

Memory-Efficient Deep Learning on a SpiNNaker 2 Prototype

Chen Liu, Guillaume Bellec, Bernhard Vogginger, David Kappel, Johannes Partzsch, Felix Neumärker, Sebastian Höppner, Wolfgang Maass, Steve B. Furber, Robert Legenstein, Christian G. Mayr
2018 Frontiers in Neuroscience  
On the handwritten digits dataset MNIST, this extremely sparse network achieves 96.6% classification accuracy at convergence.  ...  Utilizing the multi-processor feature of the SpiNNaker system, we found very good scaling in terms of computation time, per-core memory consumption, and energy constraints.  ...  To perform the vector-matrix multiplication in parallel, we decompose the input vector and the matrix into multiple sub-vectors and submatrices and map them into different computation nodes for simultaneous  ... 
doi:10.3389/fnins.2018.00840 pmid:30505263 pmcid:PMC6250847 fatcat:exackg5sbrdstnjqohhnagxmlm
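
The decomposition described in the snippet can be sketched as block-partitioning the weight matrix, with each tile's partial product standing in for one compute node's work. The tile counts and the reduction step below are illustrative, not SpiNNaker code:

```python
import numpy as np

def partitioned_vmm(W, x, row_blocks=4, col_blocks=4):
    """Decompose x into sub-vectors and W into sub-matrices; each (row,
    column) tile is an independent partial product that a separate core
    could evaluate, and partial results along a row stripe are summed."""
    rs = np.array_split(np.arange(W.shape[0]), row_blocks)
    cs = np.array_split(np.arange(W.shape[1]), col_blocks)
    y = np.zeros(W.shape[0])
    for r in rs:
        for c in cs:
            y[r] += W[np.ix_(r, c)] @ x[c]   # one tile per compute node
    return y

# sanity check against the monolithic product
rng = np.random.default_rng(0)
W, x = rng.normal(size=(128, 96)), rng.normal(size=96)
assert np.allclose(partitioned_vmm(W, x), W @ x)
```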

Quantum Annealing Based Binary Compressive Sensing with Matrix Uncertainty [article]

Ramin Ayanzadeh, Seyedahmad Mousavi, Milton Halem, Tim Finin
2019 arXiv   pre-print
quantum computers (in general), CMOS annealers, optical parametric oscillators, and neuromorphic computing.  ...  Our approach formulates an Ising model whose ground state represents a sparse solution for the binary compressive sensing problem and then employs an alternating minimization scheme to tackle the binary  ...  We would like to thank the D-Wave Systems management team for access to the 2000Q quantum computer at Burnaby, Canada.  ... 
arXiv:1901.00088v1 fatcat:giafok4o6jdy7oioclvchhumt4
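
For context, the generic way to pose binary compressive sensing to an annealer is as a QUBO whose ground state minimizes the residual ||y - Ax||^2 over binary x. The sketch below derives that matrix; the paper's actual Ising model additionally handles matrix uncertainty via alternating minimization, which is not shown here.

```python
import numpy as np

def binary_cs_qubo(A, y):
    """QUBO matrix Q such that x @ Q @ x (x binary) equals
    ||y - A x||^2 - ||y||^2. Expanding the residual gives
    x^T A^T A x - 2 (A^T y)^T x; because x_i^2 = x_i for binary
    variables, the linear term folds into the diagonal."""
    Q = A.T @ A
    Q[np.diag_indices_from(Q)] -= 2.0 * (A.T @ y)
    return Q

# sanity check on a random instance
rng = np.random.default_rng(0)
A = rng.normal(size=(8, 12))
x = rng.integers(0, 2, size=12)
y = rng.normal(size=8)
Q = binary_cs_qubo(A, y)
assert np.isclose(x @ Q @ x, np.sum((y - A @ x) ** 2) - y @ y)
```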

Real-time Tracking Based on Neuromorphic Vision [article]

Hongmin Li, Pei Jing, Guoqi Li
2015 arXiv   pre-print
Real-time tracking is an important problem in computer vision in which most methods are based on conventional cameras.  ...  Our method demonstrates that computer vision methods can be used for neuromorphic vision processing, and that we can realize fast real-time tracking using neuromorphic vision sensors compared to the  ...  We employ a sparse random matrix $M \in \mathbb{R}^{m \times n}$ to project the feature vector $x \in \mathbb{R}^{n}$ to a low-dimensional space $v \in \mathbb{R}^{m}$ based on compressive sensing theory, $v = Mx$ (4), where $m \ll n$.  ... 
arXiv:1510.05275v1 fatcat:l4advmyxt5efze5jjjxwzb2gue
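
The reconstructed projection $v = Mx$ can be illustrated with a very sparse random matrix of the Achlioptas/Li type; the sparsity parameter s and the scaling below are conventional choices, not necessarily the paper's:

```python
import numpy as np

def sparse_projection_matrix(m, n, s=3, seed=0):
    """Very sparse random projection: entries are +sqrt(s), 0, -sqrt(s)
    with probabilities 1/(2s), 1 - 1/s, 1/(2s). Multiplying by M
    compresses an n-dim feature vector to m dims while approximately
    preserving distances (compressive sensing / JL-style guarantee)."""
    rng = np.random.default_rng(seed)
    vals = rng.choice([np.sqrt(s), 0.0, -np.sqrt(s)],
                      size=(m, n),
                      p=[1 / (2 * s), 1 - 1 / s, 1 / (2 * s)])
    return vals / np.sqrt(m)

M = sparse_projection_matrix(m=50, n=1000)
x = np.random.default_rng(1).normal(size=1000)  # high-dim feature vector
v = M @ x                                       # low-dim measurement v = Mx
```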

Low‐Power Computing with Neuromorphic Engineering

Dingbang Liu, Hao Yu, Yang Chai
2020 Advanced Intelligent Systems  
The parallel digitizing approach splits the vector-matrix multiplication into multiple inner-product operations between two vectors.  ...  Neuromorphic computation (e.g., vector-matrix multiplication) requires device characteristics with linear and symmetric tuning, long retention, low stochastic behavior (blind updates), and low energy  ...  Keywords: in-memory computing, low-power neuromorphic computing, nonvolatile memories, synaptic devices  ... 
doi:10.1002/aisy.202000150 fatcat:wxbtla4zd5a6ho42xmpix4gv7m
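
One possible reading of the "parallel digitizing" split (my assumption, not the paper's definition): each output of the vector-matrix multiply is an inner product evaluated bit-plane by bit-plane over a digitized input, as in many mixed-signal accelerators.

```python
import numpy as np

def digitized_inner_product(w, x, bits=8):
    """Evaluate w . x one input bit-plane at a time: quantize x (assumed
    in [0, 1]) to `bits` bits, take the inner product of w with each
    binary sub-vector, and shift-and-add the partial sums. A full
    vector-matrix multiply is this routine repeated per weight row."""
    x_q = np.round(x * (2**bits - 1)).astype(np.int64)
    acc = 0.0
    for b in range(bits):
        bit_plane = (x_q >> b) & 1            # binary vector for bit b
        acc += (w @ bit_plane) * (2**b)       # inner product of two vectors
    return acc / (2**bits - 1)

rng = np.random.default_rng(0)
w, x = rng.normal(size=32), rng.random(32)
assert np.isclose(digitized_inner_product(w, x), w @ x, atol=0.05)
```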

Sparse coding with memristor networks

Patrick M. Sheridan, Fuxi Cai, Chao Du, Wen Ma, Zhengya Zhang, Wei D. Lu
2017 Nature Nanotechnology  
Sparse representation of information provides a powerful means to perform feature extraction on high-dimensional data and is of broad interest for applications in signal processing, computer vision, object  ...  When constructed into a crossbar form, memristor networks offer the desired density and connectivity that are required for hardware implementation of neuromorphic computing systems [13][14][15].  ...  By doing so, the matrix-matrix multiplication operation $D^T D$ in equation (2a) is reduced to two sequential vector-matrix multiplication operations (one used to calculate $\hat{x} = Da^T$ and the other used to  ... 
doi:10.1038/nnano.2017.83 pmid:28530717 fatcat:yh4wz7empre5nckqjdjm554uxm
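
The quoted reduction is easy to check numerically (written here in column-vector convention, $D^T(Da)$ rather than $Da^T$): two passes through the same stored matrix, with the Gram matrix $D^TD$ never materialized, which is what makes it crossbar-friendly.

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 256))   # dictionary (rows: features, cols: atoms)
a = rng.normal(size=256)         # current sparse activity vector

# Naive: form the 256 x 256 Gram matrix explicitly, then one VMM.
gram_result = (D.T @ D) @ a

# Crossbar-friendly: two sequential vector-matrix products through D --
# a forward pass for the reconstruction and a backward pass through D^T.
x_hat = D @ a
two_pass = D.T @ x_hat

assert np.allclose(gram_result, two_pass)
```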

Power System Disturbance Classification with Online Event-Driven Neuromorphic Computing [article]

Kaveri Mahapatra, Sen Lu, Abhronil Sengupta, Nilanjan Ray Chaudhuri
2020 arXiv   pre-print
To solve this challenge without compromising accuracy, this paper presents a novel methodology based on an event-driven neuromorphic computing architecture for classification of power system disturbances.  ...  Moreover, a QR decomposition-based selection technique is proposed to identify signals participating in the low-rank subspace of multiple disturbance events.  ...  the lower-dimensional signal subspace. While $h \le H_{\text{train}}$: 1) apply economy QR decomposition to $Y_h$ to obtain the permutation matrix $P$, the orthonormal matrix $Q$, and $R$; 2) find the permutation vector $p$ from $P$ containing  ... 
arXiv:2006.06682v3 fatcat:rfkhz5fmbze4lpcvu6jkwjadjq
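
SciPy's pivoted economy QR exposes exactly the quantities in the reconstructed steps above. A minimal selection routine (the name select_signals and the reading of the leading pivots as the selected signals are mine):

```python
import numpy as np
from scipy.linalg import qr

def select_signals(Y, r):
    """Pick r columns of the measurement matrix Y that best span its
    low-rank subspace, via economy QR with column pivoting. The pivot
    order plays the role of the permutation vector p in the sketch."""
    Q, R, piv = qr(Y, mode="economic", pivoting=True)
    return piv[:r]   # indices of the r most informative signals

# toy usage: rank-4 data from 30 signals
rng = np.random.default_rng(0)
Y = rng.normal(size=(200, 4)) @ rng.normal(size=(4, 30))
chosen = select_signals(Y, r=4)
```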

Vector Symbolic Architectures as a Computing Framework for Nanoscale Hardware [article]

Denis Kleyko, Mike Davies, E. Paxon Frady, Pentti Kanerva, Spencer J. Kent, Bruno A. Olshausen, Evgeny Osipov, Jan M. Rabaey, Dmitri A. Rachkovskij, Abbas Rahimi, Friedrich T. Sommer
2021 arXiv   pre-print
We demonstrate in this article that the ring-like algebraic structure of Vector Symbolic Architectures offers simple but powerful operations on high-dimensional vectors that can support all data structures  ...  This paper serves as a reference for computer architects by illustrating the techniques and philosophy of VSAs for distributed computing and their relevance to emerging computing hardware, such as neuromorphic computing  ...  Concerning VSAs with Sparse Block-Codes, in particular with complex-valued sparse vectors, these seem to be the most amenable to implementation on neuromorphic and coupled-oscillator hardware (see Section  ... 
arXiv:2106.05268v1 fatcat:7atbowwq7jcrrmzkiewfkpd6mu
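
A minimal flavor of those "simple but powerful operations", using dense bipolar hypervectors (one member of the VSA family; the paper surveys several, including the sparse block-codes mentioned above): binding is elementwise multiplication, bundling is a majority vote, and together they store and query a key-value record.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10_000                                   # hypervector dimensionality

def rand_hv():
    return rng.choice([-1, 1], size=d)       # random bipolar hypervector

def bind(a, b):
    """Binding: elementwise multiply (self-inverse for bipolar vectors)."""
    return a * b

def bundle(*vs):
    """Bundling (superposition): elementwise sign of the sum."""
    return np.sign(np.sum(vs, axis=0))

# key-value record {name: x, kind: y} packed into a single hypervector
name, x, kind, y = rand_hv(), rand_hv(), rand_hv(), rand_hv()
record = bundle(bind(name, x), bind(kind, y))

# unbinding recovers a noisy x; a codebook lookup would clean it up
x_noisy = bind(record, name)
assert (x_noisy @ x) / d > 0.3   # high similarity to the stored filler
```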

Spatially Arranged Sparse Recurrent Neural Networks for Energy Efficient Associative Memory

Gouhei Tanaka, Ryosho Nakane, Tomoya Takeuchi, Toshiyuki Yamane, Daiju Nakano, Yasunao Katayama, Akira Hirose
2019 IEEE Transactions on Neural Networks and Learning Systems  
In the first approach following classical methods, we focus on sparse modular network structures inspired by biological brain networks and examine their storage capacity under an iterative learning rule  ...  The development of hardware neural networks, including neuromorphic hardware, has been accelerated over the past few years.  ...  a new computing paradigm including neuromorphic computing.  ... 
doi:10.1109/tnnls.2019.2899344 pmid:30892239 fatcat:idjzdwe665aodl7wdpz5jqftx4
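
A toy version of the first approach (connection probabilities and module layout are illustrative, not the paper's): Hebbian learning on a Hopfield-style associative memory whose weights are masked to a sparse modular graph.

```python
import numpy as np

def modular_mask(n, modules, p_intra=0.8, p_inter=0.02, seed=0):
    """Sparse modular connectivity: dense within modules, very sparse
    between them (brain-inspired; the probabilities are illustrative)."""
    rng = np.random.default_rng(seed)
    labels = np.arange(n) % modules
    same = labels[:, None] == labels[None, :]
    mask = rng.random((n, n)) < np.where(same, p_intra, p_inter)
    mask = mask & mask.T                 # keep edges symmetric
    np.fill_diagonal(mask, False)
    return mask

def train_hopfield(patterns, mask):
    """Hebbian outer-product weights restricted to the allowed edges."""
    W = (patterns.T @ patterns) / patterns.shape[0]
    return W * mask

def recall(W, x, steps=20):
    for _ in range(steps):
        x = np.sign(W @ x)
        x[x == 0] = 1
    return x

# store 3 bipolar patterns, recall one from a corrupted cue;
# storage capacity depends on the mask density
rng = np.random.default_rng(1)
P = rng.choice([-1, 1], size=(3, 240))
W = train_hopfield(P, modular_mask(240, modules=4))
noisy = P[0] * rng.choice([1, 1, 1, -1], size=240)   # ~25% bit flips
recovered = recall(W, noisy)
```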

A Digital Neuromorphic Architecture Efficiently Facilitating Complex Synaptic Response Functions Applied to Liquid State Machines [article]

Michael R. Smith, Aaron J. Hill, Kristofor D. Carlson, Craig M. Vineyard, Jonathon Donaldson, David R. Follett, Pamela L. Follett, John H. Naegle, Conrad D. James, James B. Aimone
2017 arXiv   pre-print
This poses a problem as the primary computational bottleneck for neural networks is the vector-matrix multiply when inputs are multiplied by the neural network weights.  ...  Additionally, synapses in biological neural networks are not binary connections, but exhibit a nonlinear response function as neurotransmitters are emitted and diffuse between neurons.  ...  One of the contributing factors to the computational complexity of neural networks is the vector-matrix multiplications (the input vector multiplied by the synapse or weight matrix).  ... 
arXiv:1704.08306v1 fatcat:mvg6m7g5vnddtftd24kyawo56m
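
The non-binary synapse the snippet describes is commonly modeled as a difference-of-exponentials kernel convolved with the spike train; the time constants below are illustrative, and the paper's hardware-specific response functions are not reproduced.

```python
import numpy as np

def double_exp_kernel(t, tau_rise=1.0, tau_decay=5.0):
    """Difference-of-exponentials post-synaptic response: a spike injects
    current that rises and decays as neurotransmitter is released and
    diffuses, rather than adding a fixed weight instantly."""
    k = np.exp(-t / tau_decay) - np.exp(-t / tau_rise)
    return np.where(t >= 0, k, 0.0)

# post-synaptic current = weight * (spike train convolved with kernel)
dt = 0.1
t = np.arange(0, 50, dt)                      # time axis in ms
spikes = np.zeros_like(t)
spikes[[50, 120, 130]] = 1.0                  # spikes at 5, 12, 13 ms
psc = 0.7 * np.convolve(spikes, double_exp_kernel(t), mode="full")[:t.size]
```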

Robust computation with rhythmic spike patterns

E. Paxon Frady, Friedrich T. Sommer
2019 Proceedings of the National Academy of Sciences of the United States of America  
in emerging neuromorphic devices.  ...  Building on Hebbian neural associative memories, like Hopfield networks, we first propose threshold phasor associative memory (TPAM) networks.  ...  Neuromorphic Computing.  ... 
doi:10.1073/pnas.1902653116 pmid:31431524 pmcid:PMC6731666 fatcat:urhs462ppzdvnfjijhn7cchdea
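
A bare-bones reading of the TPAM idea (my sketch from the abstract, not the authors' code): store phasor patterns with a Hebbian complex outer product, then iterate a threshold-phasor update in which each unit re-emits a unit phasor at the phase of its input only if the input magnitude clears a threshold.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 256, 10
Z = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(p, n)))  # stored phasor patterns

W = (Z.T @ Z.conj()) / n          # Hebbian weights: sum_mu z_i^mu conj(z_j^mu)
np.fill_diagonal(W, 0)            # no self-connections

def tpam_step(s, theta=0.5):
    """Threshold phasor update: fire a unit phasor at the input phase if
    the input magnitude exceeds theta, otherwise stay silent."""
    u = W @ s
    return np.where(np.abs(u) > theta, np.exp(1j * np.angle(u)), 0.0)

# recall from a partial cue: keep ~60% of one stored pattern
s = np.where(rng.random(n) < 0.6, Z[0], 0.0)
for _ in range(10):
    s = tpam_step(s)
overlap = np.abs(np.vdot(Z[0], s)) / n   # near 1 on successful recall
```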