Run-time Mapping of Spiking Neural Networks to Neuromorphic Hardware
[article]
2020
arXiv
pre-print
In this paper, we propose a design methodology to partition and map the neurons and synapses of online-learning SNN-based applications to neuromorphic architectures at run-time. Our design methodology operates in two steps: step 1 is a layer-wise greedy approach that partitions the SNN into clusters of neurons and synapses while incorporating the constraints of the neuromorphic architecture, and step 2 is a hill-climbing optimization algorithm that minimizes the total number of spikes communicated between clusters, improving energy consumption on the shared interconnect of the architecture. We conduct experiments to evaluate the feasibility of our algorithm using synthetic and realistic SNN-based applications. We demonstrate that our algorithm reduces SNN mapping time by an average of 780x compared to a state-of-the-art design-time SNN partitioning approach, with only 6.25% lower solution quality.
arXiv:2006.06777v1
fatcat:bwncem4t55fq3eugwzg2mxawz4
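The two-step flow this abstract describes (greedy layer-wise partitioning, then hill-climbing on inter-cluster spike traffic) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the graph encoding, the cluster capacity, and the single-swap move are all assumptions.

```python
import random

def greedy_partition(layers, capacity):
    """Step 1 (sketch): walk the SNN layer by layer, packing neurons
    into clusters without exceeding the crossbar's neuron capacity."""
    clusters, current = [], []
    for layer in layers:
        for neuron in layer:
            if len(current) == capacity:
                clusters.append(current)
                current = []
            current.append(neuron)
    if current:
        clusters.append(current)
    return clusters

def inter_cluster_spikes(clusters, spike_counts):
    """Total spikes on synapses whose endpoints lie in different clusters."""
    where = {n: i for i, c in enumerate(clusters) for n in c}
    return sum(s for (u, v), s in spike_counts.items() if where[u] != where[v])

def hill_climb(clusters, spike_counts, iters=200, seed=0):
    """Step 2 (sketch): repeatedly try swapping two neurons between
    clusters; keep a swap only if it lowers inter-cluster spike traffic."""
    rng = random.Random(seed)
    best = inter_cluster_spikes(clusters, spike_counts)
    for _ in range(iters):
        a, b = rng.sample(range(len(clusters)), 2)
        i, j = rng.randrange(len(clusters[a])), rng.randrange(len(clusters[b]))
        clusters[a][i], clusters[b][j] = clusters[b][j], clusters[a][i]
        cost = inter_cluster_spikes(clusters, spike_counts)
        if cost < best:
            best = cost
        else:  # revert the swap
            clusters[a][i], clusters[b][j] = clusters[b][j], clusters[a][i]
    return clusters, best
```

On a toy two-layer SNN this produces capacity-respecting clusters and then only ever lowers the inter-cluster spike count.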
Dynamic Reliability Management in Neuromorphic Computing
[article]
2021
arXiv
pre-print
Neuromorphic computing systems use non-volatile memory (NVM) to implement high-density and low-energy synaptic storage. The elevated voltages and currents needed to operate NVMs cause aging of the CMOS-based transistors in each neuron and synapse circuit of the hardware, drifting the transistors' parameters from their nominal values. Aggressive device scaling increases power density and temperature, which accelerates this aging, challenging the reliable operation of neuromorphic systems. Existing reliability-oriented techniques periodically de-stress all neuron and synapse circuits in the hardware at fixed intervals, assuming worst-case operating conditions, without actually tracking their aging at run time. To de-stress these circuits, normal operation must be interrupted, which introduces latency in spike generation and propagation, impacting the inter-spike interval and hence performance, e.g., accuracy. We propose a new architectural technique to mitigate the aging-related reliability problems in neuromorphic systems by designing an intelligent run-time manager (NCRTM), which dynamically de-stresses neuron and synapse circuits in response to the short-term aging of their CMOS transistors during the execution of machine learning workloads, with the objective of meeting a reliability target. NCRTM de-stresses these circuits only when it is absolutely necessary to do so, otherwise reducing the performance impact by scheduling de-stress operations off the critical path. We evaluate NCRTM with state-of-the-art machine learning workloads on a neuromorphic hardware. Our results demonstrate that NCRTM significantly improves the reliability of neuromorphic hardware, with marginal impact on performance.
arXiv:2105.02038v1
fatcat:xmtxzzp6iferrm7ov3uks5rcsa
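The policy NCRTM implements — de-stress only when the reliability target would otherwise be missed, and prefer idle slots off the critical path — can be sketched as a simple run-time loop. The stress accounting, the threshold, and the 50%-of-threshold opportunistic rule below are illustrative assumptions, not the paper's aging model.

```python
def ncrtm_schedule(stress_per_job, threshold, idle_after):
    """Sketch of the de-stress policy described above: accumulate
    per-circuit stress as jobs execute, and trigger a de-stress
    operation only when accumulated stress would cross the
    reliability threshold; prefer idle slots (off the critical path)."""
    stress = 0.0
    actions = []
    for step, (job_stress, is_idle) in enumerate(zip(stress_per_job, idle_after)):
        stress += job_stress
        if stress >= threshold:                      # de-stress is unavoidable now
            actions.append((step, "destress-critical"))
            stress = 0.0
        elif is_idle and stress >= 0.5 * threshold:  # opportunistic, off critical path
            actions.append((step, "destress-idle"))
            stress = 0.0
    return actions
```

With an idle slot available, the manager de-stresses opportunistically; without one, it interrupts execution only once the threshold is actually reached.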
On the Role of System Software in Energy Management of Neuromorphic Computing
[article]
2021
arXiv
pre-print
Neuromorphic computing systems such as DYNAPs and Loihi have recently been introduced to the computing community to improve the performance and energy efficiency of machine learning programs, especially those implemented using Spiking Neural Networks (SNNs). The role of system software for neuromorphic systems is to cluster a large machine learning model (e.g., one with many neurons and synapses) and map these clusters to the computing resources of the hardware. In this work, we formulate the energy consumption of a neuromorphic hardware, considering the power consumed by neurons and synapses and the energy consumed in communicating spikes on the interconnect. Based on this formulation, we first evaluate the role of system software in managing the energy consumption of neuromorphic systems. Next, we formulate a simple heuristic-based mapping approach to place the neurons and synapses onto the computing resources to reduce energy consumption. We evaluate our approach with 10 machine learning applications and demonstrate that the proposed mapping approach leads to a significant reduction in the energy consumption of neuromorphic computing systems.
arXiv:2103.12231v1
fatcat:ltcroj7vynh4db55m6tzwmtm4y
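A toy version of such an energy formulation, and a greedy placer driven by it, might look like the following. The cost model (static per-cluster energy plus spikes × per-hop energy × hop distance) and all names are assumptions for illustration, not the paper's formulation.

```python
def mapping_energy(placement, cluster_energy, spikes, hop_energy, hops):
    """Total energy = static energy of every placed cluster, plus, for
    each inter-cluster spike, an interconnect cost proportional to the
    hop distance between the tiles the two clusters sit on."""
    compute = sum(cluster_energy[c] for c in placement)
    communicate = sum(n * hop_energy * hops(placement[a], placement[b])
                      for (a, b), n in spikes.items())
    return compute + communicate

def greedy_place(clusters, tiles, spikes, cluster_energy, hop_energy, hops):
    """Simple heuristic: place clusters one at a time on the tile that
    currently minimizes total energy (only edges with both endpoints
    already placed contribute)."""
    placement = {}
    for c in clusters:
        def cost(t):
            trial = {**placement, c: t}
            edges = {e: n for e, n in spikes.items()
                     if e[0] in trial and e[1] in trial}
            return mapping_energy(trial, cluster_energy, edges, hop_energy, hops)
        placement[c] = min(tiles, key=cost)
    return placement
```

On a two-cluster example the placer co-locates heavily communicating clusters, driving the interconnect term to zero.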
Compiling Spiking Neural Networks to Neuromorphic Hardware
[article]
2020
arXiv
pre-print
Machine learning applications implemented with a spike-based computation model, e.g., Spiking Neural Networks (SNNs), have great potential to lower energy consumption when executed on neuromorphic hardware. However, compiling and mapping an SNN to the hardware is challenging, especially when the compute and storage resources of the hardware (viz. crossbars) need to be shared among the neurons and synapses of the SNN. We propose an approach to analyze and compile SNNs on a resource-constrained neuromorphic hardware, providing guarantees on key performance metrics such as execution time and throughput. Our approach makes the following three key contributions. First, we propose a greedy technique to partition an SNN into clusters of neurons and synapses such that each cluster can fit onto the resources of a crossbar. Second, we exploit the rich semantics and expressiveness of Synchronous Dataflow Graphs (SDFGs) to represent a clustered SNN and analyze its performance using Max-Plus Algebra, considering the available compute and storage capacities, buffer sizes, and communication bandwidth. Third, we propose a fast, self-timed-execution-based technique to compile and admit SNN-based applications to a neuromorphic hardware at run-time, adapting dynamically to the available resources on the hardware. We evaluate our approach with standard SNN-based applications and demonstrate a significant performance improvement compared to current practices.
arXiv:2004.03717v1
fatcat:nxnmzwycvzdafbulsk55skpyyu
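A key quantity behind SDFG/Max-Plus performance guarantees is the maximum cycle mean (MCM) of the delay-weighted graph, which bounds the achievable iteration period. As an illustration (not the paper's code), Karp's classical algorithm computes the MCM of a small directed graph; the `(u, v, weight)` edge encoding is an assumption.

```python
def max_cycle_mean(n, edges):
    """Karp's algorithm: maximum mean weight over all cycles of a
    directed graph with n nodes and edges given as (u, v, weight)."""
    NEG = float("-inf")
    # D[k][v] = maximum weight of any k-edge path ending at node v
    D = [[NEG] * n for _ in range(n + 1)]
    for v in range(n):
        D[0][v] = 0.0
    for k in range(1, n + 1):
        for u, v, w in edges:
            if D[k - 1][u] > NEG and D[k - 1][u] + w > D[k][v]:
                D[k][v] = D[k - 1][u] + w
    best = NEG
    for v in range(n):
        if D[n][v] == NEG:
            continue
        # Karp's characterization: minimize over prefix lengths, then
        # maximize over end nodes.
        worst = min((D[n][v] - D[k][v]) / (n - k)
                    for k in range(n) if D[k][v] > NEG)
        best = max(best, worst)
    return best
```

For a two-node cycle with weights 2 and 4 the MCM is 3, i.e., the steady-state iteration period of the corresponding max-plus system is 3 time units per firing.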
DFSynthesizer: Dataflow-based Synthesis of Spiking Neural Networks to Neuromorphic Hardware
[article]
2021
arXiv
pre-print
Spiking Neural Networks (SNNs) are an emerging computation model that uses event-driven activation and bio-inspired learning algorithms. SNN-based machine-learning programs are typically executed on tile-based neuromorphic hardware platforms, where each tile consists of a computation unit called a crossbar, which maps the neurons and synapses of the program. However, synthesizing such programs on an off-the-shelf neuromorphic hardware is challenging because of the inherent resource and latency limitations of the hardware, which impact both model performance, e.g., accuracy, and hardware performance, e.g., throughput. We propose DFSynthesizer, an end-to-end framework for synthesizing SNN-based machine learning programs to neuromorphic hardware. The proposed framework works in four steps. First, it analyzes a machine-learning program and generates an SNN workload using representative data. Second, it partitions the SNN workload and generates clusters that fit on the crossbars of the target neuromorphic hardware. Third, it exploits the rich semantics of Synchronous Dataflow Graphs (SDFGs) to represent a clustered SNN program, allowing for performance analysis in terms of key hardware constraints such as the number of crossbars, the dimension of each crossbar, buffer space on tiles, and tile communication bandwidth. Finally, it uses a novel scheduling algorithm to execute clusters on the crossbars of the hardware, guaranteeing hardware performance. We evaluate DFSynthesizer with 10 commonly used machine-learning programs. Our results demonstrate that DFSynthesizer provides a much tighter performance guarantee compared to current mapping approaches.
arXiv:2108.02023v1
fatcat:5yuttyxivnhb7klhhv7blniwhm
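Step 2 above — generating clusters that fit the crossbars — comes down to respecting a crossbar's input/output dimensions. A minimal sketch follows; the greedy fill order and the `(pre, post)` synapse encoding are assumptions, not DFSynthesizer's partitioner.

```python
def fits_crossbar(cluster_synapses, n):
    """A list of (pre, post) synapses fits an n x n crossbar iff the
    distinct pre-synaptic neurons fit the input wordlines and the
    distinct post-synaptic neurons fit the output bitlines."""
    pre = {u for u, _ in cluster_synapses}
    post = {v for _, v in cluster_synapses}
    return len(pre) <= n and len(post) <= n

def partition_synapses(synapses, n):
    """Greedy sketch: fill a cluster with synapses until adding one
    more would exceed the crossbar's dimensions, then start a new one."""
    clusters, current = [], []
    for s in synapses:
        if not fits_crossbar(current + [s], n):
            clusters.append(current)
            current = []
        current.append(s)
    if current:
        clusters.append(current)
    return clusters
```

Each resulting cluster is guaranteed to map onto a single n × n crossbar; how many clusters result feeds directly into the SDFG-level throughput analysis.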
Mapping Spiking Neural Networks to Neuromorphic Hardware
[article]
2019
arXiv
pre-print
Neuromorphic hardware platforms implement biological neurons and synapses to execute spiking neural networks (SNNs) in an energy-efficient manner. We present SpiNeMap, a design methodology to map SNNs to crossbar-based neuromorphic hardware, minimizing spike latency and energy consumption. SpiNeMap operates in two steps: SpiNeCluster and SpiNePlacer. SpiNeCluster is a heuristic-based clustering technique that partitions SNNs into clusters of synapses, where intra-cluster local synapses are mapped within crossbars of the hardware and inter-cluster global synapses are mapped to the shared interconnect. SpiNeCluster minimizes the number of spikes on global synapses, which reduces spike congestion on the shared interconnect, improving application performance. SpiNePlacer then finds the best placement of local and global synapses on the hardware using a meta-heuristic-based approach to minimize energy consumption and spike latency. We evaluate SpiNeMap using synthetic and realistic SNNs on the DynapSE neuromorphic hardware. We show that SpiNeMap reduces average energy consumption by 45% and average spike latency by 21% compared to state-of-the-art techniques.
arXiv:1909.01843v1
fatcat:w4kpthbfcvbjpiw7oue7bxcpyq
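SpiNePlacer's meta-heuristic placement step can be illustrated with a generic simulated-annealing placer that minimizes spikes × hops on a 2-D tile grid. This is a stand-in for, not a reproduction of, the actual optimizer; the cost model, cooling schedule, and names are assumptions.

```python
import math
import random

def placement_cost(pos, spikes):
    """Proxy for interconnect energy/latency: spikes on global synapses
    weighted by Manhattan hop distance between their clusters' tiles."""
    return sum(n * (abs(pos[a][0] - pos[b][0]) + abs(pos[a][1] - pos[b][1]))
               for (a, b), n in spikes.items())

def anneal_place(clusters, tiles, spikes, steps=500, t0=5.0, seed=1):
    """Swap the tiles of two random clusters; always accept improvements,
    and accept worse moves with a temperature-decaying probability."""
    rng = random.Random(seed)
    pos = dict(zip(clusters, tiles))            # one cluster per tile
    cost = placement_cost(pos, spikes)
    best_pos, best_cost = dict(pos), cost
    for step in range(steps):
        a, b = rng.sample(clusters, 2)
        pos[a], pos[b] = pos[b], pos[a]
        new = placement_cost(pos, spikes)
        t = t0 * (1 - step / steps) + 1e-9      # linear cooling
        if new <= cost or rng.random() < math.exp((cost - new) / t):
            cost = new
            if cost < best_cost:
                best_pos, best_cost = dict(pos), cost
        else:
            pos[a], pos[b] = pos[b], pos[a]     # revert the swap
    return best_pos, best_cost
```

Starting from a deliberately bad placement, the annealer quickly moves heavily communicating clusters onto adjacent tiles.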
Enabling Resource-Aware Mapping of Spiking Neural Networks via Spatial Decomposition
[article]
2020
arXiv
pre-print
Balaji, S. Song, A. Das, J. Shackleford, and N. ...
arXiv:2009.09298v1
fatcat:6xphw47ux5dx3pmju25drphnm4
Design of Many-Core Big Little μBrain for Energy-Efficient Embedded Neuromorphic Computing
[article]
2021
arXiv
pre-print
As spiking-based deep learning inference applications increase in embedded systems, these systems tend to integrate neuromorphic accelerators such as μBrain to improve energy efficiency. We propose a μBrain-based scalable many-core neuromorphic hardware design to accelerate the computations of spiking deep convolutional neural networks (SDCNNs). To increase energy efficiency, cores are designed to be heterogeneous in their neuron and synapse capacity (big cores have higher capacity than the little ones), and they are interconnected using a parallel segmented bus interconnect, which leads to lower latency and energy compared to a traditional mesh-based Network-on-Chip (NoC). We propose a system software framework called SentryOS to map SDCNN inference applications to the proposed design. SentryOS consists of a compiler and a run-time manager. The compiler compiles an SDCNN application into sub-networks by exploiting the internal architecture of big and little μBrain cores. The run-time manager schedules these sub-networks onto cores and pipelines their execution to improve throughput. We evaluate the proposed big-little many-core neuromorphic design and the system software framework with five commonly used SDCNN inference applications and show that the proposed solution reduces energy (between 37% and …), latency (between 9% and …), and … (… 36%) compared to … neuromorphic accelerators.
arXiv:2111.11838v1
fatcat:hd5hzgth5zhp7nhy5y2vcoajmu
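The big-little mapping and pipelined execution that SentryOS performs can be caricatured in a few lines; the core capacities, the fits-on-little rule, and the throughput model (pipeline rate limited by the slowest stage) are illustrative assumptions, not SentryOS itself.

```python
def assign_cores(subnets, big_capacity, little_capacity):
    """Place each compiled sub-network on a little core when its neuron
    count fits there, otherwise on a big core (assumes enough cores of
    each kind are available; sub-networks exceeding even a big core's
    capacity would need re-compiling and are skipped here)."""
    return [(name, "little" if neurons <= little_capacity else "big")
            for name, neurons in subnets
            if neurons <= big_capacity]

def pipeline_throughput(stage_latencies):
    """When sub-networks execute as pipeline stages across cores,
    steady-state throughput is limited by the slowest stage."""
    return 1.0 / max(stage_latencies)
```

Pipelining is what lets small and large cores work concurrently: the run-time manager overlaps stage executions so the slowest sub-network, not the sum of all of them, sets the inference rate.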
A Framework to Explore Workload-Specific Performance and Lifetime Trade-offs in Neuromorphic Computing
[article]
2019
arXiv
pre-print
Balaji, S. Song, A. Das, N. Kandasamy are with Drexel University, Philadelphia, PA, USA E-mail:anup.das@drexel.edu. • N. Dutt and J. ...
arXiv:1911.00548v1
fatcat:7nm5hgxe7fav7cumj56wdshld4
Design Methodology for Embedded Approximate Artificial Neural Networks
2019
Proceedings of the 2019 on Great Lakes Symposium on VLSI - GLSVLSI '19
Artificial neural networks (ANNs) have demonstrated significant promise in recognition and classification applications. The implementation of pre-trained ANNs on embedded systems requires representing data and design parameters in low-precision fixed-point formats, which often requires retraining of the network. For such implementations, the multiply-accumulate operation is the main source of the resulting high resource and energy requirements. To address these challenges, we present Rox-ANN, a design methodology for implementing ANNs using processing elements (PEs) designed with low-precision fixed-point numbers and high-performance, reduced-area approximate multipliers on FPGAs. The trained design parameters of the ANN are analyzed and clustered to optimize the total number of approximate multipliers required in the design. With our methodology, we achieve insignificant loss in application accuracy. We evaluate the design using a LeNet-based implementation of the MNIST digit recognition application. The results show a 65.6%, 55.1%, and 18.9% reduction in area, energy consumption, and latency, respectively, for a PE using 8-bit precision weights and activations and approximate arithmetic units, compared to 16-bit full-precision, accurate-arithmetic PEs.
doi:10.1145/3299874.3319490
dblp:conf/glvlsi/BalajiU0019
fatcat:spt3luew3zbllj355jyrxynjq4
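Two of the ideas in this abstract — low-precision fixed-point representation and clustering trained weights so fewer distinct (approximate) multiplier constants are needed — can be sketched independently of any FPGA details. The bit widths and the 1-D k-means below are illustrative assumptions, not Rox-ANN itself.

```python
def to_fixed_point(x, frac_bits=4, total_bits=8):
    """Quantize a real-valued weight to a signed fixed-point integer
    (sketch of the low-precision representation discussed above)."""
    scale = 1 << frac_bits
    lo, hi = -(1 << (total_bits - 1)), (1 << (total_bits - 1)) - 1
    return max(lo, min(hi, round(x * scale)))

def cluster_weights(weights, k, iters=20):
    """1-D k-means sketch: group trained weights into k clusters so a
    PE only needs one (approximate) multiplier constant per cluster."""
    centers = sorted(weights)[:: max(1, len(weights) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for w in weights:
            i = min(range(len(centers)), key=lambda i: abs(w - centers[i]))
            groups[i].append(w)
        centers = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centers)]
    return centers
```

After clustering, each weight is replaced by its cluster center, so the design only instantiates k multiplier constants instead of one per distinct weight.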
NeuroXplorer 1.0: An Extensible Framework for Architectural Exploration with Spiking Neural Networks
[article]
2021
arXiv
pre-print
Recently, both industry and academia have proposed many different neuromorphic architectures to execute applications designed with Spiking Neural Networks (SNNs). Consequently, there is a growing need for an extensible simulation framework that can perform architectural explorations with SNNs, including both platform-based design of today's hardware, and hardware-software co-design and design-technology co-optimization of the future. We present NeuroXplorer, a fast and extensible framework that is based on a generalized template for modeling a neuromorphic architecture that can be infused with the specific details of a given hardware and/or technology. NeuroXplorer can perform both low-level cycle-accurate architectural simulations and high-level analysis with data-flow abstractions. NeuroXplorer's optimization engine can incorporate hardware-oriented metrics such as energy, throughput, and latency, as well as SNN-oriented metrics such as inter-spike interval distortion and spike disorder, which directly impact SNN performance. We demonstrate the architectural exploration capabilities of NeuroXplorer through case studies with many state-of-the-art machine learning models.
arXiv:2105.01795v1
fatcat:yztiegjepvho5ecztv2akaj4vy
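The two SNN-oriented metrics named above can be given simple reference definitions. The exact formulations used by the framework may differ, so treat these as illustrative assumptions.

```python
def isi(spike_times):
    """Inter-spike intervals of one neuron's spike train."""
    return [b - a for a, b in zip(spike_times, spike_times[1:])]

def isi_distortion(expected, observed):
    """Mean absolute change in inter-spike interval caused by hardware
    latencies (one plausible definition of the metric named above)."""
    e, o = isi(expected), isi(observed)
    return sum(abs(x - y) for x, y in zip(e, o)) / len(e)

def spike_disorder(expected_order, observed_order):
    """Count spike pairs (identified by id) whose arrival order on
    hardware is flipped relative to the expected software order."""
    pos = {s: i for i, s in enumerate(observed_order)}
    n = len(expected_order)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if pos[expected_order[i]] > pos[expected_order[j]])
```

Both metrics are zero in an ideal (latency-free) hardware and grow as interconnect congestion delays or reorders spikes, which is why they serve as proxies for SNN accuracy degradation.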
PyCARL: A PyNN Interface for Hardware-Software Co-Simulation of Spiking Neural Network
[article]
2020
arXiv
pre-print
We present PyCARL, a PyNN-based common Python programming interface for hardware-software co-simulation of spiking neural networks (SNNs). Through PyCARL, we make the following two key contributions. First, we provide an interface from PyNN to CARLsim, a computationally efficient, GPU-accelerated, and biophysically detailed SNN simulator. PyCARL facilitates joint development of machine learning models and code sharing between CARLsim and PyNN users, promoting an integrated and larger neuromorphic community. Second, we integrate cycle-accurate models of state-of-the-art neuromorphic hardware such as TrueNorth, Loihi, and DynapSE in PyCARL, to accurately model hardware latencies that delay spikes between communicating neurons and degrade performance. PyCARL allows users to analyze and optimize the performance difference between software-only simulation and hardware-software co-simulation of their machine learning models. We show that system designers can also use PyCARL to perform design-space exploration early in the product development stage, facilitating faster time-to-deployment of neuromorphic products. We evaluate the memory usage and simulation time of PyCARL using functionality tests, synthetic SNNs, and realistic applications. Our results demonstrate that for large SNNs, PyCARL does not lead to any significant overhead compared to CARLsim. We also use PyCARL to analyze these SNNs for a state-of-the-art neuromorphic hardware and demonstrate a significant performance deviation from software-only simulations. PyCARL allows users to evaluate and minimize such differences early during model development.
arXiv:2003.09696v2
fatcat:k4oyr7r5srci5fylbbivw6kyom
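The performance deviation PyCARL exposes comes from hardware latencies delaying spikes between neurons. A toy model of that effect follows; the linear hops × latency delay and the mean-absolute-deviation measure are assumptions for illustration, not PyCARL's cycle-accurate hardware models.

```python
def delay_spikes(spike_times, route_hops, hop_latency):
    """Co-simulation sketch: shift each software-simulated spike by the
    interconnect latency of its route (hops x per-hop latency)."""
    return [t + route_hops * hop_latency for t in spike_times]

def deviation(software, hardware):
    """Mean absolute deviation between software-only and co-simulated
    spike times -- the performance difference a user would analyze."""
    return sum(abs(a - b) for a, b in zip(software, hardware)) / len(software)
```

Even this crude model shows why software-only simulation is optimistic: every extra hop on the interconnect shifts downstream spikes, and the shift compounds across layers.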
List of Contents, Author Index, Reviewers and Editorial Board
2016
Tribology Online
Connie Wiita A Study on the Impact Wear Behaviors of 40Cr Steel 273-281 Jing Wang, Leilei Lu and Zhikuan Ji Article Modeling of Gas Lubricated Compliant Foil Bearings Using Pseudo Spectral Scheme 295-305 Balaji ... Ayyappa Gundayya 76, Genjiro Hagino 366, Liying Han 121, Chen Han 147, Minoru Hanahashi 177, Rafidah Hasan 428, Alan Hase 201, Hiroaki Hasegawa 390, Hiromu Hashimoto 115, Harish Hirani 282, Adarsha ...
doi:10.2474/trol.10.iv
fatcat:b6ed3z4odjcijin6jnpfsxx63u
Village knowledge centers and the use of GIS-derived products in enhancing micro-level drought preparedness: A case study from South Central India
2007
2007 International Conference on Information and Communication Technologies and Development
The principal local partner is a community-based NGO called the Adarsha Mahila Samaikhya (AMS), which is a federation of village-level micro-credit groups in the Mandal. ...
doi:10.1109/ictd.2007.4937403
dblp:conf/ictd/KumarNRB07
fatcat:awx7ef3qxjg7jmkksxcfgh2ijq
Front Matter
2020
2020 International Joint Conference on Neural Networks (IJCNN)
Balaji, Prathyusha Adiraju, Hirak Kashyap, Anup Das, Jeffrey Krichmar, Nikil Dutt and Francky Catthoor Drexel University, United States; Stichting IMEC Nederland, Netherlands; University of California ...
#21471] Mattias Nilsson, Foteini Liwicki and Fredrik Sandin Lulea University of Technology, Sweden P1317 PyCARL: A PyNN Interface for Hardware-Software Co-Simulation of Spiking Neural Network [#20903] Adarsha ...
doi:10.1109/ijcnn48605.2020.9207579
fatcat:hptkppolhbfn7nz3yangesetpi
Showing results 1 — 15 out of 17 results