2,448 Hits in 7.2 sec

Toward Scalable, Efficient, and Accurate Deep Spiking Neural Networks With Backward Residual Connections, Stochastic Softmax, and Hybridization

Priyadarshini Panda, Sai Aparna Aketi, Kaushik Roy
2020 Frontiers in Neuroscience  
Spiking Neural Networks (SNNs) may offer an energy-efficient alternative for implementing deep learning applications.  ...  Note, artificial counterparts refer to conventional deep learning/artificial neural networks.  ...  FUNDING This work was supported in part by C-BRIC, Center for Brain Inspired Computing, a JUMP center sponsored by DARPA and SRC, by the Semiconductor Research Corporation, the National Science Foundation  ... 
doi:10.3389/fnins.2020.00653 pmid:32694977 pmcid:PMC7339963 fatcat:m43lki5vtfg7pk2ywnqock3pna

Towards Scalable, Efficient and Accurate Deep Spiking Neural Networks with Backward Residual Connections, Stochastic Softmax and Hybridization [article]

Priyadarshini Panda, Aparna Aketi, Kaushik Roy
2019 arXiv   pre-print
Spiking Neural Networks (SNNs) may offer an energy-efficient alternative for implementing deep learning applications.  ...  Note, artificial counterparts refer to conventional deep learning/artificial neural networks.  ...  ACKNOWLEDGMENT This work was supported in part by C-BRIC, Center for Brain-inspired Computing, a JUMP center sponsored by DARPA and SRC, by the Semiconductor Research Corporation, the National Science  ... 
arXiv:1910.13931v1 fatcat:yuuv4ynjpze5hisw6dsntqo3ei

Editorial: Understanding and Bridging the Gap Between Neuromorphic Computing and Machine Learning

Lei Deng, Huajin Tang, Kaushik Roy
2021 Frontiers in Computational Neuroscience  
AUTHOR CONTRIBUTIONS All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.  ...  with low execution latency and low power consumption.  ...  machine learning such as artificial neural networks (ANNs).  ... 
doi:10.3389/fncom.2021.665662 pmid:33815083 pmcid:PMC8010134 fatcat:l5frrkuzprbovpb4tf327mhmtq

Accelerating Spike-by-Spike Neural Networks on FPGA with Hybrid Custom Floating-Point and Logarithmic Dot-Product Approximation

Yarib Nevarez, David Rotermund, Klaus R. Pawelzik, Alberto Garcia-Ortiz
2021 IEEE Access  
However, deep SbS networks require a memory footprint and a computational cost unsuitable for embedded applications.  ...  This approach reduces computational latency, memory footprint, and power dissipation while preserving inference accuracy.  ...  ACKNOWLEDGMENTS This work is funded by the Consejo Nacional de Ciencia y Tecnologia -CONACYT (the Mexican National Council for Science and Technology).  ... 
doi:10.1109/access.2021.3085216 fatcat:dxvv2cvc5zdv5hxhwe2wew2wsi

A Review of Heuristic Global Optimization Based Artificial Neural Network Training Approaches

D. Geraldine Bessie Amali, Dinakaran M.
2017 IAES International Journal of Artificial Intelligence (IJ-AI)  
This paper reviews the various heuristic global optimization algorithms used for training feedforward neural networks and recurrent neural networks.  ...  The training algorithms are compared in terms of the learning rate, convergence speed and accuracy of the output produced by the neural network.  ...  Particle Swarm Optimization: fewer computations required to learn, but slower convergence; Hybrid Artificial Bee Colony: accurate results, but not practicable for high-dimensional  ... 
doi:10.11591/ijai.v6.i1.pp26-32 fatcat:mkalw6ikzbh45fteocs56nuz4i

Rethinking Pareto Frontier for Performance Evaluation of Deep Neural Networks [article]

Vahid Partovi Nia, Alireza Ghaffari, Mahdi Zolnouri, Yvon Savaria
2022 arXiv   pre-print
We propose to use a multi-dimensional Pareto frontier to re-define the efficiency measure of candidate deep learning models, where several variables such as training cost, inference latency, and accuracy  ...  Furthermore, a random version of the multi-dimensional Pareto frontier is introduced to mitigate the uncertainty of accuracy, latency, and throughput of deep learning models in different experimental setups  ...  Furthermore, different designs and frameworks are developed to run neural networks efficiently on CPU (Courville & Partovi Nia, 2019).  ... 
arXiv:2202.09275v4 fatcat:nea2a7q3ivaqhop7ulmwym3dka

A recipe for creating ideal hybrid memristive-CMOS neuromorphic computing systems [article]

Elisabetta Chicca, Giacomo Indiveri
2019 arXiv   pre-print
The development of memristive device technologies has reached a level of maturity to enable the design of complex and large-scale hybrid memristive-CMOS neural processing systems.  ...  innovative solutions for always-on edge-computing and Internet-of-Things (IoT) applications.  ...  artificial neural networks 27, 28 .  ... 
arXiv:1912.05637v1 fatcat:xhk7feo4ozbdpkvtxxjffyj6mm

Integrated human-machine intelligence for EV charging prediction in 5G smart grid

Dedong Sun, Qinghai Ou, Xianjiong Yao, Songji Gao, Zhiqiang Wang, Wenjie Ma, Wenjing Li
2020 EURASIP Journal on Wireless Communications and Networking  
Considering the inherently high mobility and low reliability of EVs, it is a great challenge for the smart grid to provide on-demand services for EVs.  ...  Therefore, we propose a novel smart grid architecture based on network slicing and edge computing technologies for the 5G smart grid.  ...  Funding This work is supported by State Grid Science and Technology project "Analysis of Power Wireless Private Network Evolution and 4G/5G Technology Application" (Grant No. 5700-201941235A-0-0-00).  ... 
doi:10.1186/s13638-020-01752-y fatcat:zliz4h24oneqnatnuj3f3hoyue

Hardware-Efficient Stochastic Binary CNN Architectures for Near-Sensor Computing

Vivek Parmar, Bogdan Penkovsky, Damien Querlioz, Manan Suri
2022 Frontiers in Neuroscience  
Stochastic computing can allow conversion of such high-precision computations to a sequence of binarized operations while maintaining equivalent accuracy.  ...  With recent advances in the field of artificial intelligence (AI) such as binarized neural networks (BNNs), a wide variety of vision applications with energy-optimized implementations have become possible  ...  -M., and Querlioz, D. (2019). Stochastic computing for hardware implementation of binarized neural networks.  ... 
doi:10.3389/fnins.2021.781786 pmid:35069101 pmcid:PMC8766965 fatcat:7xpaoytyg5ftpa4ng3jau3dkyi

Exploring the Connection Between Binary and Spiking Neural Networks [article]

Sen Lu, Abhronil Sengupta
2020 arXiv   pre-print
run-time optimization techniques for reducing inference latency of spiking networks (both for binary and full-precision models) by an order of magnitude over prior work.  ...  An important implication of this work is that Binary Spiking Neural Networks can be enabled by "In-Memory" hardware accelerators catered for Binary Neural Networks without suffering any accuracy degradation  ...  Further, we explore several design-time and run-time optimizations and perform extensive empirical analysis to demonstrate high-accuracy and low-latency SNNs through ANN-SNN conversion techniques.  ... 
arXiv:2002.10064v3 fatcat:ehys44xhrzgxrhekjic5otwnly

A novel time efficient learning-based approach for smart intrusion detection system

Sugandh Seth, Gurvinder Singh, Kuljit Kaur Chahal
2021 Journal of Big Data  
The proposed model with hybrid feature selection and LightGBM gives 97.73% accuracy, 96% sensitivity, 99.3% precision rate, and comparatively low prediction latency.  ...  However, existing Intrusion Detection Systems have been developed using outdated attack datasets, with more focus on prediction accuracy and less on prediction latency.  ...  In addition to having a high accuracy rate, the proposed model offers low prediction latency.  ... 
doi:10.1186/s40537-021-00498-8 fatcat:n7haycf4izfnlhf3zakiija52u

Nonvolatile Memories in Spiking Neural Network Architectures: Current and Emerging Trends

M. Lakshmi Varshika, Federico Corradi, Anup Das
2022 Electronics  
Current trends in neuromorphic technologies address the challenges of investigating novel materials, systems, and architectures for enabling high-integration and extremely low-power brain-inspired computing  ...  Neuromorphic systems mimic biological functions by employing spiking neural networks for achieving brain-like efficiency, speed, adaptability, and intelligence.  ...  solves benchmark temporal tasks such as ElectroCardioGram (ECG) audio classification with high accuracy and low energy.  ... 
doi:10.3390/electronics11101610 fatcat:x4aqw2xk55g5tmdfqvygyxh5eu

Efficient Machine Learning, Compilers, and Optimizations for Embedded Systems [article]

Xiaofan Zhang, Yao Chen, Cong Hao, Sitao Huang, Yuhong Li, Deming Chen
2022 arXiv   pre-print
Deep Neural Networks (DNNs) have achieved great success in a massive number of artificial intelligence (AI) applications by delivering high-quality computer vision, natural language processing, and virtual  ...  Challenges also come from the diverse application-specific requirements, including real-time responses, high-throughput performance, and reliable inference accuracy.  ...  The ELB-NN ELB-NN (Extremely Low Bit-width Neural Network) is proposed to enhance energy efficiency when running image classification on an embedded FPGA.  ... 
arXiv:2206.03326v1 fatcat:th66tbqxibez7hmctl2ytdiroa

Guest Editors' Introduction: Stochastic Computing for Neuromorphic Applications

Ilia Polian, John P. Hayes, Vincent T. Lee, Weikang Qian
2021 IEEE design & test  
The article shows how SC enables low-cost, low-power, and error-tolerant hardware implementation of neural networks suitable for edge computing.  ...  We can distinguish two main trends: specialized neural network (NN) processors with size and throughput  ...  Acknowledgments We are thankful to Florian Neugebauer of the University of Stuttgart for input on state of the art in the "Further SC NN designs" section and for help with picture material. The  ... 
doi:10.1109/mdat.2021.3080989 fatcat:tycdcvndf5cndfmlw5qiqiwe5u

Exploring the Connection Between Binary and Spiking Neural Networks

Sen Lu, Abhronil Sengupta
2020 Frontiers in Neuroscience  
run-time optimization techniques for reducing inference latency of spiking networks (both for binary and full-precision models) by an order of magnitude over prior work.  ...  An important implication of this work is that Binary Spiking Neural Networks can be enabled by "In-Memory" hardware accelerators catered for Binary Neural Networks without suffering any accuracy degradation  ...  This work is aimed at performing an extensive empirical analysis to substantiate the feasibility of achieving high-accuracy and low-latency B-SNNs.  ... 
doi:10.3389/fnins.2020.00535 pmid:32670002 pmcid:PMC7327094 fatcat:pudqb3bksvc5zkdh6rpp5ytk7m
Showing results 1 — 15 out of 2,448 results