Structured Convolutions for Efficient Neural Network Design [article] (2020, arXiv pre-print)
In this work, we tackle model efficiency by exploiting redundancy in the implicit structure of the building blocks of convolutional neural networks. ...
Furthermore, we present a Structural Regularization loss that encourages neural network layers to adopt this desired structure so that, after training, they can be decomposed with negligible ...
Acknowledgements We would like to thank our Qualcomm AI Research colleagues for their support and assistance, in particular that of Andrey Kuzmin, Tianyu Jiang, Khoi Nguyen, Kwanghoon An and Saurabh Pitre ...
arXiv:2008.02454v2
fatcat:3l744vyejzbj5o42q6yjq3gd4e
Seesaw-Net: Convolution Neural Network With Uneven Group Convolution [article] (2019, arXiv pre-print)
In this paper, we are interested in boosting the representation capability of convolutional neural networks that utilize the inverted residual structure. ...
Based on the success of the Inverted Residual structure [Sandler et al. 2018] and Interleaved Low-Rank Group Convolutions [Sun et al. 2018], we rethink these two patterns of neural network structure, rather than ...
Acknowledgments We would like to thank Ke Sun and Mingjie Li for their source code as well as the helpful feedback. ...
arXiv:1905.03672v5
fatcat:ccdh7h466jaaznkmnwmonhgwg4
Design of Efficient Convolutional Neural Module Based on An Improved Module (2020, Advances in Science, Technology and Engineering Systems)
And we apply this module to the design of high performance small neural networks. The experiments are carried out on 101_food and caltech-256 benchmark datasets. ...
pooling layers can cause changes in the performance of neural networks. ...
At the same time, some design insights for convolutional neural networks are obtained, which can inform future convolutional neural network design work. ...
doi:10.25046/aj050143
fatcat:6nftp2lr45e27m7pg5qp4gop6y
TinySpeech: Attention Condensers for Deep Speech Recognition Neural Networks on Edge Devices [article] (2020, arXiv pre-print)
In this study, we introduce the concept of attention condensers for building low-footprint, highly-efficient deep neural networks for on-device speech recognition on the edge. ...
To illustrate its efficacy, we introduce TinySpeech, low-precision deep neural networks comprising largely of attention condensers tailored for on-device speech recognition using a machine-driven design ...
However, there are complexity barriers that limit the efficiency that deep neural networks based on existing deep convolutional neural network design patterns can achieve, and as such exploring alternative design ...
arXiv:2008.04245v6
fatcat:4iajmayck5fhzdjikb44yfnscu
A Convolutional Neural Network Algorithm for the Optimization of Emergency Nursing Rescue Efficiency for Critical Patients (2021, Journal of Healthcare Engineering)
In order to help pathologists quickly locate the lesion area, improve the diagnostic efficiency, and reduce missed diagnosis, a convolutional neural network algorithm for the optimization of emergency ...
neural network algorithm can greatly improve the efficiency of emergency nursing. ...
Finally, the output passes through a convolution layer with a 7 × 7 kernel. After designing the network structure, a classifier should be designed for the patient posture behavior algorithm. ...
doi:10.1155/2021/1034972
pmid:34659675
pmcid:PMC8514904
fatcat:37smyhgh2feb5ohkdqkgcd44d4
Binarizing MobileNet via Evolution-based Searching [article] (2020, arXiv pre-print)
Inspired by one-shot architecture search frameworks, we manipulate the idea of group convolution to design efficient 1-Bit Convolutional Neural Networks (CNNs), assuming an approximately optimal trade-off ...
Designing efficient binary architectures is not trivial due to the binary nature of the network. ...
We thank all anonymous reviewers for their constructive and valuable feedback. The code will be available at link ...
arXiv:2005.06305v2
fatcat:wqpnfyedt5ey7fztndu5atdhfq
AttendNets: Tiny Deep Image Recognition Neural Networks for the Edge via Visual Attention Condensers [article] (2020, arXiv pre-print)
to several deep neural networks in research literature designed for efficiency while achieving highest accuracies (with the smallest AttendNet achieving ∼7.2 operations, ∼4.17× fewer parameters, and ∼ ...
In this study, we introduce AttendNets, low-precision, highly compact deep neural networks tailored for on-device image recognition. ...
compared to previously proposed efficient deep neural networks designed for on-device image recognition. ...
arXiv:2009.14385v1
fatcat:ur7ix4qzmfbzxjft5gnk7ukrzy
Unadorned Gabor based Convolutional Neural Network Overrides Transfer Learning Concept (2021, International Journal of Applied Engineering Research)
The efficiency of Convolutional Neural Networks (CNNs) is highly influenced by the size of the dataset. To train CNN systems from scratch, a very large dataset is essential. ...
This work sheds new light on deep learning research, where researchers are often forced to focus on building highly complex CNN structures. ...
Fig. 3 shows the basic structure of the Unadorned Gabor Based Convolution Neural Network. ...
doi:10.37622/ijaer/13.13.2018.11012-11017
fatcat:gjmmw5hihjaw7maqo4rpt337gy
StochasticNet in StochasticNet (2016, Journal of Computational Vision and Imaging Systems)
Deep neural networks have been shown to outperform conventional state-of-the-art approaches in several structured prediction applications. ...
The experimental results show that SiS can form deep neural networks with NiN architectures that have 4X greater architectural efficiency with only a 2% drop in accuracy for the CIFAR10 dataset. ...
The authors also thank Nvidia for the GPU hardware used in this study through the Nvidia Hardware Grant Program. ...
doi:10.15353/vsnl.v2i1.106
fatcat:cb42cy645ne47epomeobyo7h6i
Editorial: Special Issue on Compact Deep Neural Networks With Industrial Applications (2020, IEEE Journal on Selected Topics in Signal Processing)
In "Structured Pruning for Efficient Convolutional Neural Networks via Incremental Regularization", Wang et al. propose a novel regularization-based pruning method, named IncReg, to incrementally assign ...
"Accelerating Convolutional Neural Network via Structured Gaussian Scale Mixture Models: a Joint Grouping and Pruning Approach" by Huang et al. proposes a hybrid network compression technique for exploiting ...
doi:10.1109/jstsp.2020.3006323
fatcat:d75ni7ocajb4pemovq2l3ton4i
FPGA Accelerating Core Design Based on XNOR Neural Network Algorithm (2018, MATEC Web of Conferences)
Therefore, this article studies a single-bit parameterized quantized neural network algorithm (XNOR) and optimizes the neural network algorithm based on the structural characteristics of the FPGA platform ...
With the design and implementation of the FPGA acceleration core, the experimental results show that the acceleration effect is obvious. ...
If an XNOR network is used for accelerator design, inputting one pixel per shot will make the system no longer efficient. ...
doi:10.1051/matecconf/201817301024
fatcat:ald544rjdjhhdnrdws3u3cfs6q
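The XNOR trick behind the entry above is standard in binarized networks: once weights and activations are constrained to {-1, +1}, a dot product reduces to a bitwise XNOR followed by a popcount over bit-packed words, which is what makes the FPGA mapping cheap. A minimal sketch in plain Python (illustrative names only, not the paper's FPGA implementation):

```python
# Binary dot product via XNOR + popcount, equivalent to a float dot
# product when both vectors take values in {-1, +1}.

def binarize(signs):
    # Pack a {-1, +1} vector into an integer bitmask (bit set for +1).
    mask = 0
    for i, s in enumerate(signs):
        if s > 0:
            mask |= 1 << i
    return mask

def xnor_dot(x, w, n):
    # XNOR counts matching bits; dot = matches - mismatches = 2*matches - n.
    matches = bin(~(x ^ w) & ((1 << n) - 1)).count("1")
    return 2 * matches - n

a = [1, -1, -1, 1, 1, -1, 1, 1]
b = [1, 1, -1, -1, 1, -1, -1, 1]
ref = sum(ai * bi for ai, bi in zip(a, b))
assert xnor_dot(binarize(a), binarize(b), len(a)) == ref  # both give 2
```

On hardware, the XNOR and popcount each cover a whole word per cycle, which is why feeding one pixel per shot (as the snippet notes) wastes the accelerator.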
Recent Advances in Convolutional Neural Network Acceleration [article] (2018, arXiv pre-print)
In recent years, convolutional neural networks (CNNs) have shown great performance in various fields such as image classification, pattern recognition, and multi-media compression. ...
structure level, algorithm level, and implementation level for acceleration methods. ...
Convolutional Neural Network: The modern convolutional neural network proposed by LeCun [22] is the 7-layer (excluding the input layer) LeNet-5 structure. ...
arXiv:1807.08596v1
fatcat:jx66ekaofjhqzdbaueal476bvi
Research on a handwritten character recognition algorithm based on an extended nonlinear kernel residual network (2018, KSII Transactions on Internet and Information Systems)
For this reason, this paper took the training time and recognition accuracy into consideration and proposed a novel handwritten character recognition algorithm with newly designed network structure, which ...
nonlinear kernel residual network apriori algorithm for intra-class clustering, making the subsequent network training more pertinent; (2) presentation of an intermediate convolution model with a pre-processed ...
convolution neural network and a residual network structure using a residual kernel structure for the classification experiments. ...
doi:10.3837/tiis.2018.01.020
fatcat:wvre6qss7fftrmsuyrqg2apeyq
Computational optimization of convolutional neural networks using separated filters architecture [article] (2020, arXiv pre-print)
This paper considers a convolutional neural network transformation that reduces computational complexity and thus speeds up neural network processing. ...
Usage of convolutional neural networks (CNN) is the standard approach to image recognition despite the fact they can be too computationally demanding, for example for recognition on mobile platforms or ...
Acknowledgments This work is supported by Russian Foundation for Basic Research (projects 15-29-06083 and 16-07-01167). ...
arXiv:2002.07754v1
fatcat:xazy3sx5xfdm5bndox4lql3cca
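The separated-filters idea named in the entry above rests on a well-known fact: a rank-1 k×k filter is the outer product of two 1D filters, so the 2D convolution can be replaced by a vertical pass and a horizontal pass, cutting per-pixel work from k² to 2k multiplies. A minimal NumPy sketch under that assumption (illustrative only, not the paper's exact transformation):

```python
import numpy as np

def conv2d_valid(img, k):
    # Naive "valid" 2D cross-correlation, for reference.
    kh, kw = k.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * k)
    return out

def separate(k):
    # Split a rank-1 2D filter into a column filter and a row filter via SVD.
    u, s, vt = np.linalg.svd(k)
    col = u[:, 0] * np.sqrt(s[0])        # vertical 1D filter, shape (kh, 1)
    row = vt[0, :] * np.sqrt(s[0])       # horizontal 1D filter, shape (1, kw)
    return col[:, None], row[None, :]

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
k = np.outer([1.0, 2.0, 1.0], [-1.0, 0.0, 1.0])   # rank-1, Sobel-like

col, row = separate(k)
full = conv2d_valid(img, k)                        # k*k multiplies per pixel
sep = conv2d_valid(conv2d_valid(img, col), row)    # 2*k multiplies per pixel
assert np.allclose(full, sep)
```

For higher-rank filters the same SVD gives a sum of a few separable terms, trading a small approximation error for the reduced multiply count.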
Image Compression Based on Deep Learning: A Review (2021, Asian Journal of Research in Computer Science)
Many neural networks are used for image compression, such as deep neural networks, artificial neural networks, recurrent neural networks, and convolutional neural networks. ...
Image compression is an essential technology for encoding and improving various forms of images in the digital era. ...
Convolutional Neural Network (CNN): Convolutional neural networks are primarily used for image compression and classification but have proven successful for a variety of tasks, such as speech recognition ...
doi:10.9734/ajrcos/2021/v8i130193
fatcat:2fe4mfuvbffwphwtnxmay74oem
Showing results 1 — 15 out of 97,919 results