Table of Contents

2022 IEEE Journal on Emerging and Selected Topics in Circuits and Systems  
Parasitic-Aware Modeling and Neural Network Training Scheme for Energy-Efficient Processing-in-Memory With Resistive Crossbar Array, T.  ...  Moon, and J. H. Ko, p. 408  ...  A Heterogeneous In-Memory Computing Cluster for Flexible End-to-End Inference of Real-World Deep Neural Networks, A. Garofalo, G. Ottavi, F. Conti, G.  ... 
doi:10.1109/jetcas.2022.3180655 fatcat:gdofl5q7nfdulhuhm7bf6ttdse

Technology Aware Training in Memristive Neuromorphic Systems for Nonideal Synaptic Crossbars

Indranil Chakraborty, Deboleena Roy, Kaushik Roy
2018 IEEE Transactions on Emerging Topics in Computational Intelligence  
network, and the neuronal functionality, in a fast and energy-efficient manner.  ...  impact on the classification accuracy of a fully connected network (FCN) and a convolutional neural network (CNN) trained with a standard training algorithm.  ...  ACKNOWLEDGMENT The research was funded in part by the National Science Foundation, the Center for Spintronics funded by DARPA and SRC, Intel Corporation, the ONR MURI program, and the Vannevar Bush Faculty Fellowship  ... 
doi:10.1109/tetci.2018.2829919 fatcat:qspwpberfbdvbbg7nwxrfktkk4

Defects Mitigation in Resistive Crossbars for Analog Vector Matrix Multiplication [article]

Fan Zhang, Miao Hu
2019 arXiv   pre-print
With storage and computation happening in the same place, computing in resistive crossbars minimizes data movement and avoids the memory-bottleneck issue.  ...  This leads to ultra-high energy efficiency for data-intensive applications. However, defects in crossbars severely affect computing accuracy.  ...  In most cases, especially for neural networks, the model parameters only need to be mapped onto the crossbar once before inference.  ... 
arXiv:1912.07829v1 fatcat:eic46nib45fwramwa744yvajlq
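To make the map-once idea above concrete, here is a minimal NumPy sketch (not taken from Zhang and Hu's paper) of mapping a trained weight matrix onto crossbar conductances a single time and then measuring how randomly placed stuck-at cells perturb the analog vector-matrix product. The conductance range, defect rate, and layer sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

G_MIN, G_MAX = 1e-6, 1e-4      # assumed on/off conductance range (siemens)
DEFECT_RATE = 0.01             # assumed fraction of stuck cells

def map_weights_to_conductances(W):
    """Linearly map a trained weight matrix into [G_MIN, G_MAX]; done once, before inference."""
    w_min, w_max = W.min(), W.max()
    return G_MIN + (W - w_min) / (w_max - w_min) * (G_MAX - G_MIN)

def inject_stuck_defects(G):
    """Force a random subset of cells to a stuck-at-ON or stuck-at-OFF conductance."""
    G = G.copy()
    stuck = rng.random(G.shape) < DEFECT_RATE
    G[stuck] = rng.choice([G_MIN, G_MAX], size=int(stuck.sum()))
    return G

def crossbar_vmm(G, v_in):
    """Ideal analog vector-matrix multiply: column currents are conductance-weighted sums of input voltages."""
    return G.T @ v_in

W = rng.standard_normal((64, 10))    # toy layer weights
v = rng.random(64)                   # toy input voltages
G = map_weights_to_conductances(W)
err = np.abs(crossbar_vmm(inject_stuck_defects(G), v) - crossbar_vmm(G, v))
print("mean column-current error from defects:", err.mean())
```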

In‐Memory Vector‐Matrix Multiplication in Monolithic Complementary Metal–Oxide–Semiconductor‐Memristor Integrated Circuits: Design Choices, Challenges, and Perspectives

Amirali Amirsoleimani, Fabien Alibart, Victor Yon, Jianxiong Xu, M. Reza Pazhouhandeh, Serge Ecoffey, Yann Beilliard, Roman Genov, Dominique Drouin
2020 Advanced Intelligent Systems  
Theoretically, this directly translates to a major boost in energy efficiency and computational throughput, but various practical challenges remain.  ...  In-memory computing has emerged as a prime candidate to eliminate this bottleneck by co-locating the memory and processing.  ...  Scalability challenges and RC network Elmore delay model for RS crossbar array.  ... 
doi:10.1002/aisy.202000115 fatcat:jbumwsdpwze33paahq5qul2cja
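The snippet above mentions an RC-network Elmore delay model for the crossbar wires. The sketch below evaluates that delay for a uniform RC ladder, with per-segment resistance and capacitance chosen purely for illustration, to show how the wire delay grows roughly quadratically with array size.

```python
def elmore_delay(n_cells, r_seg=2.5, c_seg=1e-15):
    """Elmore delay at the far end of a uniform RC ladder with n_cells segments:
    each capacitor c_seg sees the cumulative upstream wire resistance i * r_seg."""
    return sum(i * r_seg * c_seg for i in range(1, n_cells + 1))

# Delay grows roughly quadratically with array size, one of the scalability limits noted above.
for n in (64, 128, 256, 512):
    print(f"{n} cells: {elmore_delay(n):.3e} s")
```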

Resistive Crossbar-Aware Neural Network Design and Optimization

Muhammad Abdullah Hanif, Aditya Manglik, Muhammad Shafique
2020 IEEE Access  
CONCLUSION This work represents the first step in the direction of understanding and quantifying the existing challenges in resistive crossbar-aware neural network design and optimization.  ...  in area and energy efficiency.  ...  His research has a special focus on cross-layer analysis, modeling, design, and optimization of computing and memory systems.  ... 
doi:10.1109/access.2020.3045071 fatcat:vezdek5fe5c5hdemui2qqu3bba

Neural-PIM: Efficient Processing-In-Memory with Neural Approximation of Peripherals

Weidong Cao, Yilong Zhao, Adith Boloor, Yinhe Han, Xuan Zhang, Li Jiang
2021 IEEE transactions on computers  
We then leverage a neural approximation method to design both analog accumulation circuits (S+A) and quantization circuits (ADCs) with RRAM crossbar arrays in a highly efficient manner.  ...  Processing-in-memory (PIM) architectures have demonstrated great potential in accelerating numerous deep learning tasks.  ...  Here, V_in,0, ..., V_in,...  ...  In Step 4, we leverage hardware-aware training techniques to find feasible and robust weights for the trained NN model.  ... 
doi:10.1109/tc.2021.3122905 fatcat:dagen62uunhdvkl3un6cz2bedi
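The abstract refers to hardware-aware training (Step 4) on top of RRAM-based accumulation and quantization circuits. The sketch below captures only the generic hardware-aware-training idea: a plain uniform ADC model stands in for the paper's neurally approximated peripherals, and all bit widths and ranges are assumptions.

```python
import numpy as np

def adc_quantize(x, bits=6, x_max=1.0):
    """Uniform ADC model applied to analog column outputs (bit width and full-scale range are assumed)."""
    levels = 2 ** bits - 1
    x_clipped = np.clip(x, -x_max, x_max)
    codes = np.round((x_clipped + x_max) / (2 * x_max) * levels)
    return codes / levels * (2 * x_max) - x_max

def hardware_aware_forward(W, x, bits=6):
    """Forward pass that applies the same quantization the peripherals would,
    so training/validation sees an approximation of the deployed transfer function."""
    return adc_quantize(x @ W, bits=bits)

# Toy check: the quantization error shrinks as the assumed ADC resolution grows.
rng = np.random.default_rng(0)
W, x = rng.standard_normal((32, 8)) * 0.05, rng.random(32)
for b in (4, 6, 8):
    print(b, np.abs(hardware_aware_forward(W, x, b) - x @ W).mean())
```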

Evaluating complexity and resilience trade-offs in emerging memory inference machines [article]

Christopher H. Bennett, Ryan Dellana, T. Patrick Xiao, Ben Feinberg, Sapan Agarwal, Suma Cardwell, Matthew J. Marinella, William Severa, Brad Aimone
2020 arXiv   pre-print
In this work, we use realistic crossbar simulations to highlight that compact implementations of deep neural networks are unexpectedly susceptible to collapse from multiple system disturbances.  ...  Our work proposes a middle path towards high performance and strong resilience utilizing the Mosaics framework, and specifically by re-using synaptic connections in a recurrent neural network implementation  ...  Recently, long short-term memory (LSTM) networks have been the most heavily considered for implementation with dense nonvolatile memory arrays [7]; however, such schemes involve complex crossbar partitioning  ... 
arXiv:2003.10396v1 fatcat:ehqfnisp75frlfvur2zrtyzvpa

A Survey of ReRAM-Based Architectures for Processing-In-Memory and Neural Networks

Sparsh Mittal
2018 Machine Learning and Knowledge Extraction  
Resistive random access memory (ReRAM) is a promising technology for efficiently architecting PIM- and NN-based accelerators due to its capability to work as both high-density/low-energy storage and  ...  ), and especially neural network (NN)-based accelerators, has grown significantly.  ...  networks [30], spiking neural networks [1, 31], and processing-in-memory [32].  ... 
doi:10.3390/make1010005 dblp:journals/make/Mittal19 fatcat:ti3ud2v6l5bffegfn3gzrm2lca

Partial-Gated Memristor Crossbar for Fast and Power-Efficient Defect-Tolerant Training

Pham, Nguyen, Min
2019 Micromachines  
The proposed scheme has been verified by CADENCE circuit simulation with a real memristor's Verilog-A model.  ...  To reduce the programming time and power, a partial gating scheme is proposed here to realize partial training, where only some of the neurons are trained, namely those that are more responsible for the recognition  ...  The energy-efficient and fast training of memristor-based neural networks is very important in edge-computing applications [14].  ... 
doi:10.3390/mi10040245 pmid:31013938 pmcid:PMC6523436 fatcat:btlta34dpfezne6ihzhnxfbig4
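As an algorithmic stand-in for the partial-training idea described above (the paper realizes it with a partial-gated memristor circuit), the sketch below updates only the weight columns of the output neurons most responsible for the current error; the learning rule, learning rate, and selection fraction are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def partial_train_step(W, x, target, lr=0.1, frac=0.25):
    """One delta-rule update in which only the most 'responsible' output neurons
    (largest absolute error) have their weight columns reprogrammed;
    the remaining columns receive no programming pulses this step."""
    y = x @ W
    err = target - y
    k = max(1, int(frac * W.shape[1]))
    responsible = np.argsort(-np.abs(err))[:k]
    W[:, responsible] += lr * np.outer(x, err[responsible])
    return W

# Toy usage: 16 inputs, 10 output neurons, random target pattern.
W = rng.standard_normal((16, 10)) * 0.1
x, t = rng.random(16), rng.random(10)
for _ in range(20):
    W = partial_train_step(W, x, t)
print("remaining error:", np.abs(x @ W - t).max())
```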

IR-QNN Framework: An IR Drop-Aware Offline Training Of Quantized Crossbar Arrays

Mohammed E. Fouda, Sugil Lee, Jongeun Lee, Gun Hwan Kim, Fadi Kurdahi, Ahmed Eltawil
2020 IEEE Access  
Resistive Crossbar Arrays present an elegant implementation solution for Deep Neural Network acceleration.  ...  In this paper, we propose a fast and efficient training and validation framework to incorporate the wire resistance in Quantized DNNs, without the need for computationally expensive SPICE simulations during  ...  INTRODUCTION Artificial Intelligence hardware acceleration has attracted significant interest [1], [2], especially accelerating deep neural networks (DNNs) with in-memory processing, alleviating the  ... 
doi:10.1109/access.2020.3044652 fatcat:3akhrmwkf5exhesqbxpvqfoc6i
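IR-QNN folds wire resistance into quantized-DNN training without SPICE. The sketch below is not that framework, only a generic nodal-analysis solve for a single crossbar column, showing how a per-segment wire resistance reduces the current that actually reaches the virtual-ground output; all element values are assumptions.

```python
import numpy as np

def column_current_with_ir_drop(v_rows, g_cells, r_wire=1.0):
    """Solve one crossbar column with wire resistance between adjacent cells
    (row wires assumed ideal, column terminated in a virtual ground).
    r_wire is an illustrative per-segment value, not one from the paper."""
    n = len(v_rows)
    g_w = 1.0 / r_wire
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):                       # each cell ties its column node to the driven row
        A[i, i] += g_cells[i]
        b[i] += g_cells[i] * v_rows[i]
    for i in range(n - 1):                   # wire segments between adjacent column nodes
        A[i, i] += g_w
        A[i + 1, i + 1] += g_w
        A[i, i + 1] -= g_w
        A[i + 1, i] -= g_w
    A[n - 1, n - 1] += g_w                   # last segment into the virtual-ground output
    v_nodes = np.linalg.solve(A, b)
    return g_w * v_nodes[-1]                 # current actually reaching the output

rng = np.random.default_rng(3)
v = rng.random(128)
g = rng.uniform(1e-6, 1e-4, size=128)
ideal = float(g @ v)                         # column current with zero wire resistance
print("ideal:", ideal, "with IR drop:", column_current_with_ir_drop(v, g, r_wire=2.0))
```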

Special Session: Reliability Analysis for ML/AI Hardware [article]

Shamik Kundu, Kanad Basu, Mehdi Sadi, Twisha Titirsha, Shihao Song, Anup Das, Ujjwal Guin
2021 arXiv   pre-print
The first section outlines the reliability issues in a commercial systolic-array-based ML accelerator in the presence of faults arising from device-level non-idealities in the DRAM.  ...  Next, we quantify the impact of circuit-level faults in the MSB and LSB logic cones of the Multiply and Accumulate (MAC) block of the AI accelerator on the AI/ML accuracy.  ...  Neuromorphic systems are energy efficient in executing Spiking Neural Networks (SNNs), which are considered the third generation of neural networks.  ... 
arXiv:2103.12166v2 fatcat:eceha3a6ibgojeokkg5gm3zywq
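To illustrate why MSB faults matter far more than LSB faults in a MAC datapath, here is a simplified stand-in (not the paper's methodology): it flips one stored weight bit rather than injecting a fault inside the MAC's logic cone, and compares the resulting output error for the LSB and MSB positions; word widths and operand ranges are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def flip_weight_bit(q_weights, bit, n_bits=8):
    """Flip one bit position in signed n-bit (two's-complement) quantized weights."""
    u = q_weights.astype(np.int64) & ((1 << n_bits) - 1)   # reinterpret as unsigned words
    u ^= (1 << bit)                                        # inject the single-bit fault
    return np.where(u >= 1 << (n_bits - 1), u - (1 << n_bits), u)  # back to signed

W = rng.integers(-128, 128, size=(64, 10))     # toy 8-bit weights
x = rng.integers(0, 16, size=64)               # toy 4-bit activations
ref = x @ W
for bit in (0, 7):                             # LSB vs MSB of the weight word
    faulty = x @ flip_weight_bit(W, bit)
    print(f"bit {bit}: mean |MAC output error| = {np.abs(faulty - ref).mean():.1f}")
```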

2018 Index, IEEE Transactions on Very Large Scale Integration (VLSI) Systems, Vol. 26

2018 IEEE Transactions on Very Large Scale Integration (VLSI) Systems  
..., see 2723-2736; VLSI Design of an ML-Based Power-Efficient Motion Estimation Controller for Intelligent Mobile Systems, TVLSI Feb. 2018 262-271; Hsieh, Y., see Tsai, Y., TVLSI May 2018 945-957  ...  Hsu, K., Chen, Y., Lee, Y., and Chang, S., Contactless Testing for Prebond Interposers, TVLSI June 2018 1005-1014; Hsu, Y., see Liu, Z., 1565-1574; Hu, J., see Wang, Y., TVLSI May 2018 805-817; Hu, J.  ...  +, TVLSI July 2018 1254-1267; Energy-Efficient Write Scheme for Nonvolatile Resistive Crossbar Arrays With Selectors.  ... 
doi:10.1109/tvlsi.2019.2892312 fatcat:rxiz5duc6jhdzjo4ybcxdajtbq

On the Accuracy of Analog Neural Network Inference Accelerators [article]

T. Patrick Xiao, Ben Feinberg, Christopher H. Bennett, Venkatraman Prabhakar, Prashant Saxena, Vineet Agrawal, Sapan Agarwal, Matthew J. Marinella
2022 arXiv   pre-print
A promising category of accelerators utilizes nonvolatile memory arrays to both store weights and perform in situ analog computation inside the array.  ...  This ultimately results in an analog accelerator that is more accurate, more robust to analog errors, and more energy-efficient.  ...  and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc. for the U.S.  ... 
arXiv:2109.01262v3 fatcat:nsgyu5mqoza43lpzlh4hvhzqtq

Computational Failure Analysis of In-Memory RRAM Architecture for Pattern Classification CNN Circuits

Nagaraj Lakshmana Prabhu, Nagarajan Raghavan
2021 IEEE Access  
A filamentary non-ideal RRAM model is programmed into an ML simulation architecture in a shallow analog crossbar array to study Neural Network (NN) prediction accuracy variability with MNIST handwritten text  ...  Degradation (RD) modeled Crossbar array.  ... 
doi:10.1109/access.2021.3136193 fatcat:364glzdz2javfo2k5mbqya26pa

2022 Roadmap on Neuromorphic Computing and Engineering [article]

Dennis V. Christensen, Regina Dittmann, Bernabé Linares-Barranco, Abu Sebastian, Manuel Le Gallo, Andrea Redaelli, Stefan Slesazeck, Thomas Mikolajick, Sabina Spiga, Stephan Menzel, Ilia Valov, Gianluca Milano (+47 others)
2022 arXiv   pre-print
In the Von Neumann architecture, processing and memory units are implemented as separate blocks interchanging data intensively and continuously.  ...  This new generation of computers has the potential to be used for the storage and processing of large amounts of digital information with much lower power consumption than conventional processors.  ...  Concluding Remarks Integrating event-based vision sensing and processing with neuromorphic computation techniques is expected to yield solutions that will be able to penetrate the artificial vision market  ... 
arXiv:2105.05956v3 fatcat:pqir5infojfpvdzdwgmwdhsdi4