200,629 Hits in 5.1 sec

Energy Efficient Hadamard Neural Networks [article]

T. Ceren Deveci and Serdar Cakir and A. Enis Cetin
2018 arXiv   pre-print
Among all energy efficient networks, our novel ensemble model outperforms other energy efficient models.  ...  Convolutional neural networks (CNNs), a popular deep learning architecture designed to process data in multiple-array form, show great success in almost all detection & recognition problems and  ...  TensorFlow is chosen to implement all of the deep neural networks for this work.  ... 
arXiv:1805.05421v1 fatcat:mvk77jowfjghjciem2ttqrvwr4

Review of Deep Neural Network Based on Auto-encoder

Xinlin Zhang, Yuanmeng Hu, Li Zhang, Yajing Kong, Xiang Gao, Huajing Wei
2019 DEStech Transactions on Computer Science and Engineering  
The principle of deep neural networks based on auto-encoders is described, and the application of various types of hybrid neural networks is introduced.  ...  In this article, firstly, the origins and basic concepts of deep learning, auto-encoders, deep belief networks, and convolutional neural networks are introduced.  ...  Hence, these are the problems deep hybrid neural networks need to solve. B. Convolutional Neural Network Convolutional Neural Network (CNN) is one of the deep neural network models.  ... 
doi:10.12783/dtcse/iciti2018/29087 fatcat:mt7b7jy2uzadvelvseuycgekma

Intelligent Diagnosis of Rolling Bearing Fault Based on Improved Convolutional Neural Network and LightGBM

Yanwei Xu, Weiwei Cai, Liuyang Wang, Tancheng Xie, Claudio Sbarufatti
2021 Shock and Vibration  
A diagnosis method of rolling bearing fault based on the improved convolutional neural network and light gradient boosting machine is proposed.  ...  Finally, the verification experiment is carried out, and the experimental result shows that the average training and diagnosis time of the model is only 39.73 s and 0.09 s, respectively, and the average  ...  Acknowledgments The authors are grateful for the financial support provided by the National Natural Science Foundation of China under grant no. 51805151 and the Key Scientific Research Project of the University  ... 
doi:10.1155/2021/1205473 fatcat:qbbwiqshtrh4hcfeu2zn2y7xve

Fusion of Deep Learning Models for Improving Classification Accuracy of Remote Sensing Images

P Deepan
2019 JOURNAL OF MECHANICS OF CONTINUA AND MATHEMATICAL SCIENCES  
The intent of this paper is to study the effect of an ensemble classifier constructed by combining three Deep Convolutional Neural Network (DCNN) models, namely CNN, VGG-16, and Res Inception, by using average  ...  Over the recent years we have witnessed an increasing number of applications using deep learning techniques such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Deep Neural  ...  In the deep feature learning method there are several learning models such as recurrent neural network (RNN), convolutional neural network (CNN), deep neural network (DNN), and stacked auto encoder (  ... 
doi:10.26782/jmcms.2019.10.00015 fatcat:4caqx5u5ezgr5hehsxtvczun44

Application of Convolution Network Model Based on Deep Learning in Sports Image Information Detection

Xiaoqiao Zhang, L. Zhang, S. Defilla, W. Chu
2021 E3S Web of Conferences  
The average SSIM value on Set5 is 0.865, which shows that the quality of sports image reconstruction and the reconstruction efficiency of the model can be improved by using the local image features of  ...  Aiming at the problems and shortcomings of existing sports image information detection based on convolutional neural networks, this paper proposes the application of a convolution network model based on  ...  DEEP LEARNING AND BATCH NORMALIZATION LAYER IN CONVOLUTIONAL NEURAL NETWORKS Overview of Deep Learning Deep learning is closely related to the development of the neural network, which is a mathematical model  ... 
doi:10.1051/e3sconf/202123302024 fatcat:zuv7pa2j4ve7bi3qg35rxvtdfm

Functional Network: A Novel Framework for Interpretability of Deep Neural Networks [article]

Ben Zhang, Zhetong Dong, Junsong Zhang, Hongwei Lin
2022 arXiv   pre-print
Inspired by the success of functional brain networks, we propose a novel framework for interpretability of deep neural networks, that is, the functional network.  ...  The layered structure of deep neural networks hinders the use of numerous analysis tools and thus the development of its interpretability.  ...  These findings demonstrate that the functional network can not only provide explanations for deep neural networks but also evaluate the models in practice.  ... 
arXiv:2205.11702v1 fatcat:6epbp6z6unehhpavevyufgtami

Exploring Optimized Spiking Neural Network Architectures for Classification Tasks on Embedded Platforms

Tehreem Syed, Vijay Kakani, Xuenan Cui, Hakil Kim
2021 Sensors  
This work proposes customized model (VGG, ResNet) architectures to train deep convolutional spiking neural networks.  ...  The authors of [35] proposed a hybrid, computationally efficient training methodology for deep Spiking Neural Networks.  ... 
doi:10.3390/s21093240 pmid:34067080 fatcat:hsknxhxkavaqhg54lylp6hg7wy

Deep Neural Network Based Behavioral Model of Nonlinear Circuits

Zhe Jin, Sekouba Kaba
2021 Journal of Applied Mathematics and Physics  
Deep neural networks (DNNs) have been recognized as a powerful tool for nonlinear system modeling.  ...  The PA model is constructed based on a feedforward neural network with three hidden layers, and then the Multisim circuit simulator is applied to generate the raw training data.  ...  Deep Feedforward Neural Networks There is no clear threshold of depth that divides shallow neural networks from deep neural networks.  ... 
doi:10.4236/jamp.2021.93028 fatcat:vfufeg3hazexzpbufoe4wq7gpy

Discovery Radiomics via Evolutionary Deep Radiomic Sequencer Discovery for Pathologically-Proven Lung Cancer Detection [article]

Mohammad Javad Shafiee, Audrey G. Chung, Farzad Khalvati, Masoom A. Haider, Alexander Wong
2017 arXiv   pre-print
Motivated by patient privacy concerns and the idea of operational artificial intelligence, the evolutionary deep radiomic sequencer discovery approach organically evolves increasingly more efficient deep  ...  We propose a novel evolutionary deep radiomic sequencer discovery approach based on evolutionary deep intelligence.  ...  As mentioned before, one of the important obstacles in using a deep neural network as the underlying architecture for a radiomic sequencer is the efficiency of the underlying deep neural network.  ... 
arXiv:1705.03572v2 fatcat:xquwhfkwfrgafo44l3pvxi7lga

AttoNets: Compact and Efficient Deep Neural Networks for the Edge via Human-Machine Collaborative Design [article]

Alexander Wong, Zhong Qiu Lin, Brendan Chwyl
2019 arXiv   pre-print
The efficacy of human-machine collaborative design is demonstrated through the creation of AttoNets, a family of highly efficient deep neural networks for on-device edge deep learning.  ...  In this study, we take a deeper exploration into a human-machine collaborative design approach for creating highly efficient deep neural networks through a synergy between principled network design prototyping  ...  Section 2 describes in detail the humanmachine collaborative design strategy for creating highly efficient deep neural networks for edge and mobile scenarios.  ... 
arXiv:1903.07209v2 fatcat:vkgguvnkgvaznlx6h3oucfnp2m

AttoNets: Compact and Efficient Deep Neural Networks for the Edge via Human-Machine Collaborative Design

Alexander Wong, Zhong Qiu Lin, Brendan Chwyl
2019 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)  
The efficacy of human-machine collaborative design is demonstrated through the creation of AttoNets, a family of highly efficient deep neural networks for on-device edge deep learning.  ...  In this study, we take a deeper exploration into a human-machine collaborative design approach for creating highly efficient deep neural networks through a synergy between principled network design prototyping  ...  Section 2 describes in detail the human-machine collaborative design strategy for creating highly efficient deep neural networks for edge and mobile scenarios.  ... 
doi:10.1109/cvprw.2019.00095 dblp:conf/cvpr/WongLC19 fatcat:flx766dasrgsrceh6m6qibswcu

CNNLab: a Novel Parallel Framework for Neural Networks using GPU and FPGA-a Practical Study with Trade-off Analysis [article]

Maohua Zhu, Liu Liu, Chao Wang, Yuan Xie
2016 arXiv   pre-print
Designing and implementing efficient, provably correct parallel neural network processing is challenging.  ...  However, the diversity and large-scale data size have posed a significant challenge to construct a flexible and high-performance implementation of deep learning neural networks.  ...  Therefore, it poses significant challenges to implementing high-performance deep learning networks with low power cost, especially for large-scale deep learning neural network models.  ... 
arXiv:1606.06234v1 fatcat:en7acoahonb7beqrnxv553g46e

TinySpeech: Attention Condensers for Deep Speech Recognition Neural Networks on Edge Devices [article]

Alexander Wong, Mahmoud Famouri, Maya Pavlova, Siddharth Surana
2020 arXiv   pre-print
In this study, we introduce the concept of attention condensers for building low-footprint, highly-efficient deep neural networks for on-device speech recognition on the edge.  ...  These results not only demonstrate the efficacy of attention condensers for building highly efficient networks for on-device speech recognition, but also illuminate its potential for accelerating deep  ...  highly-efficient yet high-performance deep neural networks for speech recognition on edge devices.  ... 
arXiv:2008.04245v6 fatcat:4iajmayck5fhzdjikb44yfnscu

Neural Network Activation Quantization with Bitwise Information Bottlenecks [article]

Xichuan Zhou, Kui Liu, Cong Shi, Haijun Liu, Ji Liu
2020 arXiv   pre-print
Meanwhile, by reducing the code rate, the proposed method can improve the memory and computational efficiency by over six times compared with the deep neural network with standard single-precision representation  ...  Inspired by the problem of lossy signal compression for wireless communication, this paper presents a Bitwise Information Bottleneck approach for quantizing and encoding neural network activations.  ...  efficiency as well as the computational efficiency of the neural-network inference by over 6 times.  ... 
arXiv:2006.05210v1 fatcat:ncl4xrd7evgprdhx3peqzf5zrm

Analysis of Time Series Prediction using Recurrent Neural Networks

Gaurav Yadav, Richa Vasuja
2019 International Journal of Computer Applications  
Time series prediction is the heart of forecasting: it predicts data based on the past information of a particular dataset, and recurrent neural networks combined with time series algorithms provide much more reliable  ...  Based on this research, the paper contains an analysis of recurrent neural networks and their use with time series, alongside experimental data analysis of weather forecast and financial forecast data  ...  Auto Regressive Integrated Moving Average (ARIMA) [9] with dynamic regression layers in Keras is also applied to conclude the comparison of RNN to ARIMA.  ... 
doi:10.5120/ijca2019918732 fatcat:u2ighcayznegbcjty42d4j7mle