23 Hits in 6.9 sec

ISAAC: A Convolutional Neural Network Accelerator with In-Situ Analog Arithmetic in Crossbars

Ali Shafiee, Anirban Nag, Naveen Muralimanohar, Rajeev Balasubramonian, John Paul Strachan, Miao Hu, R. Stanley Williams, Vivek Srikumar
2016 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA)  
In particular, our work makes the following contributions: (i) We design a pipelined architecture, with some crossbars dedicated for each neural network layer, and eDRAM buffers that aggregate data between  ...  A number of recent efforts have attempted to design accelerators for popular machine learning algorithms, such as those involving convolutional and deep neural networks (CNNs and DNNs).  ...  ., convolutional neural networks (CNNs) and the more general deep neural networks (DNNs), can therefore have high impact.  ... 
doi:10.1109/isca.2016.12 dblp:conf/isca/ShafieeNMBSHWS16 fatcat:xgl6b5pkxvh3dja2qpuj6jadha
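The in-situ analog dot product at the heart of ISAAC can be illustrated with a minimal numerical sketch (hypothetical values; NumPy stands in for the analog crossbar): input voltages drive the rows, programmed cell conductances form the weight matrix, and each column current sums the per-cell contributions by Kirchhoff's current law.

```python
import numpy as np

# Minimal sketch of analog vector-matrix multiplication in a crossbar.
# Rows carry input voltages v; each cell stores a conductance G[i][j]
# (a programmed weight); column j collects current sum_i v[i] * G[i][j]
# via Kirchhoff's current law, i.e. the crossbar computes v @ G in place.
G = np.array([[1e-6, 2e-6],   # cell conductances in siemens (hypothetical)
              [3e-6, 4e-6]])
v = np.array([0.5, 1.0])      # input voltages in volts
i_out = v @ G                 # column currents: 3.5e-6 A and 5.0e-6 A
```

In a real device the currents are then digitized by ADCs at the column outputs; the sketch above only shows the arithmetic the crossbar performs.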

ISAAC

Ali Shafiee, Anirban Nag, Naveen Muralimanohar, Rajeev Balasubramonian, John Paul Strachan, Miao Hu, R. Stanley Williams, Vivek Srikumar
2016 SIGARCH Computer Architecture News  
In particular, our work makes the following contributions: (i) We design a pipelined architecture, with some crossbars dedicated for each neural network layer, and eDRAM buffers that aggregate data between  ...  A number of recent efforts have attempted to design accelerators for popular machine learning algorithms, such as those involving convolutional and deep neural networks (CNNs and DNNs).  ...  ., convolutional neural networks (CNNs) and the more general deep neural networks (DNNs), can therefore have high impact.  ... 
doi:10.1145/3007787.3001139 fatcat:guovr3fxe5hpnawo2zacozwvry

A Survey of Near-Data Processing Architectures for Neural Networks [article]

Mehdi Hassanpour, Marc Riera, Antonio González
2021 arXiv   pre-print
, and especially neural network (NN)-based accelerators has grown significantly.  ...  In this paper, we present a survey of techniques for designing NDP architectures for NN.  ...  Srikumar, “Isaac: A convolutional neural network accelerator with in-situ analog arithmetic in crossbars,” in 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (  ... 
arXiv:2112.12630v1 fatcat:drkwrztkazd3hlblxc7i4kgn2a

A Survey of Near-Data Processing Architectures for Neural Networks

Mehdi Hassanpour, Marc Riera, Antonio González
2022 Machine Learning and Knowledge Extraction  
, and especially neural network (NN)-based accelerators has grown significantly.  ...  In this paper, we present a survey of techniques for designing NDP architectures for NN.  ...  [38], an accelerator for CNN inference with in situ analog arithmetic in ReRAM crossbars.  ...
doi:10.3390/make4010004 fatcat:5frcwe57drgihbgygiecoqqnvy

On the Reliability of Computing-in-Memory Accelerators for Deep Neural Networks [article]

Zheyu Yan, Xiaobo Sharon Hu, Yiyu Shi
2022 arXiv   pre-print
Computing-in-memory with emerging non-volatile memory (nvCiM) is shown to be a promising candidate for accelerating deep neural networks (DNNs) with high energy efficiency.  ...  In this chapter, we first offer a brief introduction to the opportunities and challenges of nvCiM DNN accelerators and then show the properties of different types of NVM devices.  ...  The first design, In-Situ Analog Arithmetic in Crossbars (ISAAC) [39] uses crossbar arrays for both DNN weight storage and processing elements for VMM operations [54] .  ... 
arXiv:2205.13018v1 fatcat:r7nbvcnnqfds7ok3cc7g6k3cfi

Artificial neural networks based on memristive devices

Vignesh Ravichandran, Can Li, Ali Banagozar, J. Joshua Yang, Qiangfei Xia
2018 Science China Information Sciences  
A general description of neural networks is presented, followed by a survey of prominent CMOS networks, and finally networks implemented using emerging memristive devices are discussed, along with the  ...  This article provides an overview of various neural networks with an emphasis on networks based on memristive emerging devices, with the advantages of memristor neural networks compared with pure complementary  ...  [13] developed in-situ analog arithmetic in crossbars (ISAAC), a CNN accelerator architecture capable of holding synaptic weights and performing dot-product operations in the same memristor crossbar  ... 
doi:10.1007/s11432-018-9425-1 fatcat:5sxcrcshtrcj3c6g4vd7uhov4a

In‐Memory Vector‐Matrix Multiplication in Monolithic Complementary Metal–Oxide–Semiconductor‐Memristor Integrated Circuits: Design Choices, Challenges, and Perspectives

Amirali Amirsoleimani, Fabien Alibart, Victor Yon, Jianxiong Xu, M. Reza Pazhouhandeh, Serge Ecoffey, Yann Beilliard, Roman Genov, Dominique Drouin
2020 Advanced Intelligent Systems  
In this context, resistive switching (RS) memory devices are a key promising choice, due to their unique intrinsic device-level properties enabling both storing and computing with a small, massively-parallel  ...  We present a qualitative and quantitative analysis of several key existing challenges in implementing high-capacity, high-volume RS memories for accelerating the most computationally demanding computation  ...  One of the notable RS-based systems is ISAAC, which is a convolutional neural network accelerator [97].  ...
doi:10.1002/aisy.202000115 fatcat:jbumwsdpwze33paahq5qul2cja

Recent progress in analog memory-based accelerators for deep learning

Hsinyu Tsai, Stefano Ambrogio, Pritish Narayanan, Robert M Shelby, Geoffrey W Burr
2018 Journal of Physics D: Applied Physics  
We survey the extensive but rapidly developing literature on what would be needed from an analog memory device to enable such a DNN accelerator, and summarize progress with various analog memory candidates  ...  After surveying how recent circuits and systems work, we conclude with a description of the next research steps that will be needed in order to move closer to the commercialization of viable analog-memory-based  ...  A family of DNN networks offering many opportunities for such data re-use are convolutional neural networks (CONV-net) [24] .  ... 
doi:10.1088/1361-6463/aac8a5 fatcat:2xxoiiv3a5fj5hxjoamsndl66a

A Simulation Framework for Memristor-Based Heterogeneous Computing Architectures

Haikun Liu, Jiahong Xu, Xiaofei Liao, Hai Jin, Yu Zhang, Fubing Mao
2022 IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems  
Memristor-based accelerator (MBA) has demonstrated its capability in accelerating matrix-vector multiplication (MVM) with high performance and energy efficiency.  ...  In this paper, we propose a simulation framework called MHSim to evaluate the energy efficiency and performance of applications running with both MBAs and CPUs.  ...  PUMAsim assumes the weights in neural network inference accelerators have been mapped into XB arrays before the in-situ computation, without considering the cost of data mapping.  ...
doi:10.1109/tcad.2022.3152385 fatcat:7xt6yvrnavaezf7b3okolfcnke

Improving DNN Fault Tolerance using Weight Pruning and Differential Crossbar Mapping for ReRAM-based Edge AI [article]

Geng Yuan, Zhiheng Liao, Xiaolong Ma, Yuxuan Cai, Zhenglun Kong, Xuan Shen, Jingyan Fu, Zhengang Li, Chengming Zhang, Hongwu Peng, Ning Liu, Ao Ren (+2 others)
2021 arXiv   pre-print
intensive and key computation in deep neural networks (DNNs).  ...  Recent research demonstrated the promise of using resistive random access memory (ReRAM) as an emerging technology to perform inherently parallel analog domain in-situ matrix-vector multiplication -- the  ...  Accordingly, it is especially suitable for edge devices in IoT. ReRAM cells can be used to form a crossbar structure to conduct in-situ dot products, such as the convolution computations in DNNs.  ... 
arXiv:2106.09166v2 fatcat:eysjye7lcrdmbb4ll6kg46osvi

Harnessing Intrinsic Noise in Memristor Hopfield Neural Networks for Combinatorial Optimization [article]

Fuxi Cai, Suhas Kumar, Thomas Van Vaerenbergh, Rui Liu, Can Li, Shimeng Yu, Qiangfei Xia, J. Joshua Yang, Raymond Beausoleil, Wei Lu, John Paul Strachan
2019 arXiv   pre-print
Here we describe a memristor-Hopfield Neural Network (mem-HNN) with massively parallel operations performed in a dense crossbar array.  ...  We provide experimental demonstrations solving NP-hard max-cut problems directly in analog crossbar arrays, and supplement this with experimentally-grounded simulations to explore scalability with problem  ...  Acknowledgements We are grateful to Salvatore Mandra for performing the CPU simulations used in Table 1  ... 
arXiv:1903.11194v2 fatcat:5idqg4lzdrbihn6e6hippws6am

An Overview of Efficient Interconnection Networks for Deep Neural Network Accelerators

Seyed Morteza Nabavinejad, Mohammad Baharloo, Kun-Chih Chen, Maurizio Palesi, Tim Kogel, Masoumeh Ebrahimi
2020 IEEE Journal on Emerging and Selected Topics in Circuits and Systems  
Deep Neural Networks (DNNs) have shown significant advantages in many domains, such as pattern recognition, prediction, and control optimization.  ...  ., in/near-memory processing) for the DNN accelerator design. This paper systematically investigates the interconnection networks in modern DNN accelerator designs.  ...  ISAAC [50] explores in-situ processing, leveraging memristor crossbar arrays for speeding up analog execution of dot-product operations in inference phase of NN accelerators.  ... 
doi:10.1109/jetcas.2020.3022920 fatcat:idqitgwnrnegbd4dhrly3xsxbi

ATRIA: A Bit-Parallel Stochastic Arithmetic Based Accelerator for In-DRAM CNN Processing [article]

Supreeth Mysore Shivanandamurthy, Ishan. G. Thakkar, Sayed Ahmad Salehi
2021 arXiv   pre-print
With the rapidly growing use of Convolutional Neural Networks (CNNs) in real-world applications related to machine learning and Artificial Intelligence (AI), several hardware accelerator designs for CNN  ...  In this paper, we present ATRIA, a novel bit-pArallel sTochastic aRithmetic based In-DRAM Accelerator for energy-efficient and high-speed inference of CNNs.  ...  We mapped four benchmark CNNs on ATRIA to compare its performance with five state-of-the-art in-DRAM accelerators from prior work.  ... 
arXiv:2105.12781v1 fatcat:yhs33rcmejf6tesa333tih6jzy
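The stochastic-arithmetic idea underlying ATRIA can be sketched in a few lines (an illustrative simulation, not ATRIA's actual in-DRAM circuit): a value in [0, 1] is encoded as a random bitstream whose fraction of 1s equals the value, so multiplication reduces to a bitwise AND of two independent streams.

```python
import numpy as np

# Illustrative sketch of stochastic multiplication (not ATRIA's design):
# encode p and q as independent Bernoulli bitstreams, AND them bitwise,
# and read the product back as the fraction of 1s in the result.
rng = np.random.default_rng(seed=0)
n = 100_000                   # bitstream length (accuracy/latency trade-off)
p, q = 0.5, 0.25
a = rng.random(n) < p         # Bernoulli(p) bitstream
b = rng.random(n) < q         # Bernoulli(q) bitstream
estimate = np.mean(a & b)     # approximates p * q = 0.125
```

Longer bitstreams tighten the estimate at the cost of more cycles, which is the central trade-off bit-parallel schemes like ATRIA aim to mitigate.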

MSPAN: A Memristive Spike-Based Computing Engine With Adaptive Neuron for Edge Arrhythmia Detection

Jingwen Jiang, Fengshi Tian, Jinhao Liang, Ziyang Shen, Yirui Liu, Jiapei Zheng, Hui Wu, Zhiyuan Zhang, Chaoming Fang, Yifan Zhao, Jiahe Shi, Xiaoyong Xue (+1 others)
2021 Frontiers in Neuroscience  
A multi-layer deep integrative spiking neural network (DiSNN) is first designed with an accuracy of 93.6% in 4-class ECG classification tasks.  ...  In this work, a memristive spike-based computing in memory (CIM) system with adaptive neuron (MSPAN) is proposed to realize energy-efficient remote arrhythmia detection with high accuracy in edge devices  ...  “ISAAC: a convolutional neural network accelerator with in-situ analog arithmetic in crossbars,” in Proceedings of the 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA)  ... 
doi:10.3389/fnins.2021.761127 pmid:34975373 pmcid:PMC8715923 fatcat:tkgcnfj355ffbdihyvqmwtahzm

In-Memory Data Parallel Processor

Daichi Fujiki, Scott Mahlke, Reetuparna Das
2018 Proceedings of the Twenty-Third International Conference on Architectural Support for Programming Languages and Operating Systems - ASPLOS '18  
Recent developments in Non-Volatile Memories (NVMs) have opened up a new horizon for in-memory computing.  ...  To facilitate in-memory programming, we develop a compilation framework that takes a TensorFlow input and generates code for our in-memory processor.  ...  This work was supported in part by the NSF under the CAREER-1652294 award and the XPS-1628991 award, and by C-FAR, one of the six SRC STAR-net centers sponsored by MARCO and DARPA.  ...
doi:10.1145/3173162.3173171 dblp:conf/asplos/FujikiMD18 fatcat:vxzdd2jdqnbrdnlq4ypdsedkzm
Showing results 1 — 15 of 23.