15,876 Hits in 4.5 sec

Parallel neural hardware: the time is right

Ulrich Rückert, Erzsébet Merényi
2012 The European Symposium on Artificial Neural Networks  
It seems obvious that the massively parallel computations inherent in artificial neural networks (ANNs) can only be realized by massively parallel hardware.  ...  Within this paper we will discuss some key issues for parallel ANN implementation on these standard devices compared to special purpose ANN implementations.  ...  Various techniques for simulating large ANNs on parallel supercomputers or computer networks are known which can be reused for mapping ANNs to many-core architectures.  ... 
dblp:conf/esann/RuckertM12 fatcat:4v4gyxgzy5hwfooavbxumx45cy

GPU implementation of spiking neural networks for color image segmentation

Ermai Xie, Martin McGinnity, QingXiang Wu, Jianyong Cai, Rontai Cai
2011 4th International Congress on Image and Signal Processing
However, it is time-consuming to simulate large numbers of spiking neurons in such networks using CPU programming.  ...  Spiking neural networks inherit an intrinsically parallel mechanism from biological systems. A massively parallel implementation technology is required to simulate them.  ...  Therefore, the parallel architecture can be used to implement large-scale spiking neural networks.  ...
doi:10.1109/cisp.2011.6100451 fatcat:fyxbvw7uxzbijejmufnfza2xcu

Minitaur, an Event-Driven FPGA-Based Spiking Network Accelerator

Daniel Neil, Shih-Chii Liu
2014 IEEE Transactions on Very Large Scale Integration (VLSI) Systems
Spiking neural networks running on an appropriate hardware platform can allow asynchronous and massively parallel energy-efficient processing [2].  ...  However, simulating large-scale DBNs has large computational demands, which means high energy requirements and long latencies, and thus limits their use in real-time applications for mobile or robotic  ...
doi:10.1109/tvlsi.2013.2294916 fatcat:juhmivazyrfw5jpj35jnw5giky

Live demonstration: Handwritten digit recognition using spiking deep belief networks on SpiNNaker

Evangelos Stromatias, Daniel Neil, Francesco Galluppi, Michael Pfeiffer, Shih-Chii Liu, Steve Furber
2015 IEEE International Symposium on Circuits and Systems (ISCAS)
Spiking neural networks running on an appropriate hardware platform can allow asynchronous and massively parallel energy-efficient processing [2].  ...  However, simulating large-scale DBNs has large computational demands, which means high energy requirements and long latencies, and thus limits their use in real-time applications for mobile or robotic  ...
doi:10.1109/iscas.2015.7169034 dblp:conf/iscas/StromatiasNGPLF15 fatcat:xsprrdj4unbb3ae6pafoh2wffm

Towards reverse engineering the brain: Modeling abstractions and simulation frameworks

Jayram Moorkanikara Nageswaran, Micah Richert, Nikil Dutt, Jeffrey L Krichmar
2010 18th IEEE/IFIP International Conference on VLSI and System-on-Chip
Recent advances in low-cost multiprocessor architectures make it possible to build large-scale spiking network simulators.  ...  In this paper we review modeling abstractions for neural circuits and frameworks for modeling, simulating and analyzing spiking neural networks.  ...  We also thank Professors Alex Nicolau and Alex Veidenbaum for discussions on architectural platforms and parallelization.  ... 
doi:10.1109/vlsisoc.2010.5642630 dblp:conf/vlsi/NageswaranRDK10 fatcat:6uzx3uiec5doxajvyemusbrg54

Scalability and Optimization Strategies for GPU Enhanced Neural Networks (GeNN) [article]

Naresh Balaji, Esin Yavuz, Thomas Nowotny
2014 arXiv   pre-print
Simulation of spiking neural networks has traditionally been done on high-performance supercomputers or large-scale clusters.  ...  Utilizing the parallel nature of neural network computation algorithms, GeNN (GPU Enhanced Neural Network) provides a simulation environment that performs on general-purpose NVIDIA GPUs with a code generation  ...  Naresh Balaji would like to thank his family for their support and encouragement.  ...
arXiv:1412.0595v1 fatcat:vl6cr7ovubfxdjf7wquvcioml4

Parallelizing Training of Deep Generative Models on Massive Scientific Datasets [article]

Sam Ade Jacobs, Brian Van Essen, David Hysom, Jae-Seung Yeom, Tim Moon, Rushil Anirudh, Jayaraman J. Thiagarajan, Shusen Liu, Peer-Timo Bremer, Jim Gaffney, Tom Benson, Peter Robinson (+2 others)
2019 arXiv   pre-print
Training deep neural networks on large scientific data is a challenging task that requires enormous compute power, especially if no pre-trained models exist to initialize the process.  ...  Our approach combines an HPC workflow and extends LBANN with optimized data ingestion and the new tournament-style training algorithm to produce a scalable neural network architecture using a CORAL-class  ...  In addition to our work on large scale learning, developing a cognitive simulation capability requires innovation in scientific workflows and neural network architectures.  ... 
arXiv:1910.02270v1 fatcat:ib26rk5qcrbw7mz3taymmudgrq

SpiNNaker: A multi-core System-on-Chip for massively-parallel neural net simulation

Eustace Painkras, Luis A. Plana, Jim Garside, Steve Temple, Simon Davidson, Jeffrey Pepper, David Clark, Cameron Patterson, Steve Furber
2012 Proceedings of the IEEE 2012 Custom Integrated Circuits Conference  
SpiNNaker is a massively-parallel computer system designed to model up to a billion spiking neurons in real time.  ...  The modelling of large systems of spiking neurons is computationally very demanding in terms of processing power and communication.  ...  SpiNNaker [1] is a biologically-inspired, massively parallel computing architecture designed to facilitate the modelling and simulation of large-scale spiking neural networks of up to a billion neurons.  ...
doi:10.1109/cicc.2012.6330636 dblp:conf/cicc/PainkrasPGTDPCPF12 fatcat:cm5i4u3wa5ghffa52nxeqrynwa

A comparative study of GPU programming models and architectures using neural networks

Vivek K. Pallipuram, Mohammad Bhuiyan, Melissa C. Smith
2011 Journal of Supercomputing  
Spiking Neural Network (SNN) models have been widely employed to simulate the mammalian brain, capturing its functionality and inference capabilities.  ...  There has been a strong interest in the neuroscience community to model a mammalian brain in order to study its architecture and functional principles.  ...  This allows for large-scale SNN simulations that run in near real time.  ...
doi:10.1007/s11227-011-0631-3 fatcat:wq6xwp5panbnzenl7z2cib2gmi

Author index

2006 2006 IEEE International Conference on Cluster Computing  
Akbari, Mohammad K.: Analytical Network Modeling of Heterogeneous Large-Scale Cluster Systems  ...  Alonso, Pedro: A Parallel Algorithm for the Solution of the Deconvolution Problem on Heterogeneous Networks  ...  Hagimont, Daniel: Autonomic  ...
doi:10.1109/clustr.2006.311921 fatcat:vmbbimypuze7ncjqfonu4po5l4

Massive parallelism for artificial intelligence (extended abstract)

Luc Steels
1987 Microprocessing and Microprogramming  
attempting to explore massive parallelism for Artificial Intelligence.  ...  It was already clear in the early seventies that parallelism was going to be necessary for Artificial Intelligence. Initially research concentrated on small scale parallelism.  ... 
doi:10.1016/0165-6074(87)90011-1 fatcat:r7ag2bewcvhajeobqtovwhdcau

A Framework for Embedded Hypercube Interconnection Networks: Based on Neural Network Approach

Mohd. Kalamuddin Ahmad, Mohd. Husain, A. A. Zilli
2015 International Journal of Computer Applications  
In this paper we first show that an n-dimensional hypercube can be embedded in a layered neural network such that, for any node of the hypercube, its neighboring nodes in the other layer are evenly partitioned into  ...  This paper is concerned with routing of data in an embedded hypercube interconnection using an approach based on neural net architecture.  ...  Direct networks have become a popular architecture for constructing massively parallel computers because they scale well; that is, as the number of nodes in the system increases, so does the communication bandwidth  ...
doi:10.5120/21201-3873 fatcat:2qprdoc47jbntmzguipstvc7ke

A Massively Parallel Digital Learning Processor

Hans Peter Graf, Srihari Cadambi, Igor Durdanovic, Venkata Jakkula, Murugan Sankaradass, Eric Cosatto, Srimat T. Chakradhar
2008 Neural Information Processing Systems  
This massively parallel architecture is particularly attractive for embedded applications, where low power dissipation is critical.  ...  We present a new, massively parallel architecture for accelerating machine learning algorithms, based on arrays of vector processing elements (VPEs) with variable-resolution arithmetic.  ...  Besides digital processors, a large number of analog circuits were built, emulating neural network structures.  ...
dblp:conf/nips/GrafCDJSCC08 fatcat:edqizph455bv3khutz7yfl5mdu

Distributed configuration of massively-parallel simulation on SpiNNaker neuromorphic hardware

Thomas Sharp, Cameron Patterson, Steve Furber
2011 The 2011 International Joint Conference on Neural Networks  
SpiNNaker is a massively-parallel neuromorphic computing architecture designed to model very large, biologically plausible spiking neural networks in real-time.  ...  The architecture is designed for dynamic reconfiguration and optimised for transmission of neural activity data, which presents a challenge for machine configuration, program loading and simulation monitoring  ...
doi:10.1109/ijcnn.2011.6033346 dblp:conf/ijcnn/SharpPF11 fatcat:svxmxlmnz5dntiow6esdphcisy

Efficient Simulation of Biological Neural Networks on Massively Parallel Supercomputers with Hypercube Architecture

Ernst Niebur, Dean Brettle
1993 Neural Information Processing Systems  
We present a neural network simulation which we implemented on the massively parallel Connection Machine 2.  ...  We simulate neural networks of 16,384 neurons coupled by about 1000 synapses per neuron, and estimate the performance for much larger systems.  ...  Wörgötter, who provided us with the code for generating the connections, and G. Holt for his retina simulator. Discussions with C. Koch and F. Wörgötter were very helpful. We would like to thank C.  ...
dblp:conf/nips/NieburB93 fatcat:45n6tt447nge5cfcuw32cqvimu