
Compact Deep Convolutional Neural Networks With Coarse Pruning [article]

Sajid Anwar, Wonyong Sung
2016 arXiv   pre-print
We propose feature-map- and kernel-level pruning for reducing the computational complexity of a deep convolutional neural network. ... The learning capability of a neural network improves with increasing depth, at higher computational cost. ... These results are in conformity with the resiliency analysis of fixed-point deep neural networks by Sung et al. ...
arXiv:1610.09639v1 fatcat:5uh2zqanuzhmjfprhydok7mlem
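The abstract describes pruning at the feature-map (channel) level. As a rough illustration of what that means in practice, the sketch below drops the lowest-saliency output feature maps of a single convolutional layer, using the L1 norm of each filter's kernels as an assumed saliency measure (the paper evaluates its own criteria); it is not the authors' implementation.

```python
# Minimal sketch of feature-map (channel) pruning for one conv layer.
# Saliency = L1 norm of each output filter's kernels (an assumed criterion).
import torch
import torch.nn as nn

def prune_feature_maps(conv: nn.Conv2d, keep_ratio: float = 0.5) -> nn.Conv2d:
    # Rank output feature maps by the L1 norm of their kernels.
    saliency = conv.weight.detach().abs().sum(dim=(1, 2, 3))   # [out_channels]
    n_keep = max(1, int(conv.out_channels * keep_ratio))
    keep = torch.argsort(saliency, descending=True)[:n_keep]

    # Build a smaller conv that keeps only the selected filters.
    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    pruned.weight.data = conv.weight.data[keep].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep].clone()
    return pruned

conv = nn.Conv2d(64, 128, 3, padding=1)
smaller = prune_feature_maps(conv, keep_ratio=0.5)   # 128 -> 64 feature maps
print(smaller)  # the next layer's in_channels must be reduced to match
```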

Coarse and fine-grained automatic cropping deep convolutional neural network [article]

Jingfei Chang
2020 arXiv   pre-print
Existing convolutional neural network pruning algorithms can be divided into two categories: coarse-grained pruning and fine-grained pruning. ... This paper proposes a combined coarse- and fine-grained automatic pruning algorithm, which achieves more efficient and accurate compression and acceleration of convolutional neural networks. ... This article combines coarse-grained and fine-grained pruning to arrive at an automatic pruning algorithm for convolutional neural networks. ...
arXiv:2010.06379v2 fatcat:b4rgyq3kpzbipngjmzbpuqkxni
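To contrast the two granularities this entry distinguishes, here is a minimal sketch of the fine-grained side: unstructured magnitude pruning that zeroes individual weights with a binary mask (coarse-grained pruning removes whole feature maps, as sketched under the previous entry). The percentile threshold is an illustrative choice, not the paper's criterion.

```python
# Fine-grained (weight-level) magnitude pruning sketch: zero out the smallest
# weights with a binary mask.
import torch
import torch.nn as nn

def magnitude_prune_(layer: nn.Module, sparsity: float = 0.7) -> torch.Tensor:
    w = layer.weight.data
    # Threshold = the `sparsity`-quantile of absolute weight values.
    threshold = torch.quantile(w.abs().flatten(), sparsity)
    mask = (w.abs() > threshold).float()
    layer.weight.data.mul_(mask)      # zero the pruned weights in place
    return mask                       # reapply after each update to keep them zero

layer = nn.Linear(512, 256)
mask = magnitude_prune_(layer, sparsity=0.7)
print(f"kept {int(mask.sum())} of {mask.numel()} weights")
```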

2020 Index IEEE Journal of Selected Topics in Signal Processing Vol. 14

2020 IEEE Journal on Selected Topics in Signal Processing  
., +, JSTSP Oct. 2020 1235-1243 Editorial: Special Issue on Compact Deep Neural Networks With Industrial Applications.  ...  Monga, V., +, JSTSP Oct. 2020 1068-1071 Editorial: Special Issue on Compact Deep Neural Networks With Industrial Applications.  ... 
doi:10.1109/jstsp.2020.3029672 fatcat:6twwzcqpwzg4ddcu2et75po77u

Compression of Deep Neural Networks for Image Instance Retrieval [article]

Vijay Chandrasekhar, Jie Lin, Qianli Liao, Olivier Morère, Antoine Veillard, Lingyu Duan, Tomaso Poggio
2017 arXiv   pre-print
Convolutional Neural Network (CNN) based descriptors are becoming the dominant approach for generating global image descriptors for the instance retrieval problem. ... One major drawback of CNN-based global descriptors is that uncompressed deep neural network models require hundreds of megabytes of storage, making them inconvenient to deploy in mobile applications or ... Also, neural networks are getting deeper and deeper, as performance gains are obtained with increasing amounts of training data and increasing numbers of layers: e.g., deep residual networks can be ...
arXiv:1701.04923v1 fatcat:muxmtist6zesrfjubzfm4n6upe

NestedNet: Learning Nested Sparse Structures in Deep Neural Networks

Eunwoo Kim, Chanho Ahn, Songhwai Oh
2018 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition  
In this work, we propose a novel deep learning framework, called a nested sparse network, which exploits an n-in-1-type nested structure in a neural network.  ...  Recently, there have been increasing demands to construct compact deep architectures to remove unnecessary redundancy and to improve the inference speed.  ...  [15] proposed a structured deep network that can enable model parallelization and a more compact model compared with previous hierarchical deep networks.  ... 
doi:10.1109/cvpr.2018.00904 dblp:conf/cvpr/KimAO18 fatcat:mrnzlehq2zcfvgwnab2gvwf6om
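A toy illustration of the n-in-1 idea described in the abstract: several networks of increasing width share one weight tensor, each narrower level being a slice of the wider one. The widths and the prefix-slicing scheme below are simplifications for illustration, not NestedNet's actual construction.

```python
# Minimal illustration of an n-in-1 nested structure: three "networks" of
# increasing width share one weight tensor; each narrower one is a prefix
# slice of the wider one.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NestedConv(nn.Module):
    def __init__(self, in_ch=3, out_ch=64, widths=(16, 32, 64)):
        super().__init__()
        self.widths = widths                      # nested levels (assumed sizes)
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, 3, 3) * 0.05)

    def forward(self, x, level: int):
        w = self.widths[level]
        return F.conv2d(x, self.weight[:w], padding=1)  # use only the first w filters

net = NestedConv()
x = torch.randn(1, 3, 32, 32)
for level in range(3):
    print(level, net(x, level).shape)   # 16, 32, then 64 channels from shared weights
```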

UCP: Uniform Channel Pruning for Deep Convolutional Neural Networks Compression and Acceleration [article]

Jingfei Chang and Yang Lu and Ping Xue and Xing Wei and Zhen Wei
2020 arXiv   pre-print
To apply deep CNNs to mobile terminals and portable devices, much recent work has focused on compressing and accelerating deep convolutional neural networks. ... For ResNet with bottlenecks, we apply the pruning method used for traditional CNNs to trim the 3x3 convolutional layer in the middle of the blocks. ... We prune both the traditional convolutional neural network VGGNet and ResNet, which has a special structure. ...
arXiv:2010.01251v1 fatcat:tdsxmp4uwbb3dgo3xa6h7cf77q
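The snippet mentions trimming the 3x3 convolution in the middle of bottleneck blocks, where the block's outer width is fixed by the residual connection. As a rough sketch of that constraint, the code below shrinks the middle 3x3 layer and adjusts the following 1x1 convolution to match; the L1-norm channel selection is an assumed criterion rather than UCP's.

```python
import torch
import torch.nn as nn

def prune_bottleneck_middle(conv3x3: nn.Conv2d, conv1x1_out: nn.Conv2d, keep_ratio=0.5):
    # Rank the 3x3 layer's output channels by kernel L1 norm (assumed criterion).
    saliency = conv3x3.weight.detach().abs().sum(dim=(1, 2, 3))
    n_keep = max(1, int(conv3x3.out_channels * keep_ratio))
    keep = torch.argsort(saliency, descending=True)[:n_keep]

    conv3x3.weight.data = conv3x3.weight.data[keep].clone()             # drop filters
    conv3x3.out_channels = n_keep
    conv1x1_out.weight.data = conv1x1_out.weight.data[:, keep].clone()  # match inputs
    conv1x1_out.in_channels = n_keep

# A bottleneck's 1x1 convs keep the outer width (fixed by the residual addition).
c1 = nn.Conv2d(256, 64, 1, bias=False)
c2 = nn.Conv2d(64, 64, 3, padding=1, bias=False)
c3 = nn.Conv2d(64, 256, 1, bias=False)
prune_bottleneck_middle(c2, c3, keep_ratio=0.5)
x = torch.randn(1, 256, 14, 14)
print(c3(c2(c1(x))).shape)   # torch.Size([1, 256, 14, 14]) -- outer width unchanged
```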

Partial Multi-Label Learning via Multi-Subspace Representation

Ziwei Li, Gengyu Lyu, Songhe Feng
2020 Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence  
Partial Multi-Label Learning (PML) aims to learn from training data where each instance is associated with a set of candidate labels, of which only a subset are relevant. ... To tackle the problem, we propose a novel framework named partial multi-label learning via MUlti-SubspacE Representation (MUSER), where the redundant labels together with noisy features are jointly taken ... Network pruning has been a popular remedy for the over-parameterization problem of deep neural networks [Liu et al. 2018b]. ...
doi:10.24963/ijcai.2020/358 dblp:conf/ijcai/LuHDLS20 fatcat:qsldjfojpvbe3awuthsbk5uktm

NestedNet: Learning Nested Sparse Structures in Deep Neural Networks [article]

Eunwoo Kim, Chanho Ahn, Songhwai Oh
2018 arXiv   pre-print
In this work, we propose a novel deep learning framework, called a nested sparse network, which exploits an n-in-1-type nested structure in a neural network.  ...  Recently, there have been increasing demands to construct compact deep architectures to remove unnecessary redundancy and to improve the inference speed.  ...  [16] proposed a structured deep network that can enable model parallelization and a more compact model compared with previous hierarchical deep networks.  ... 
arXiv:1712.03781v2 fatcat:i4g4deecoraqjbqpfbc4cpmaz4

Compressing Deep Networks by Neuron Agglomerative Clustering

Li-Na Wang, Wenxue Liu, Xiang Liu, Guoqiang Zhong, Partha Pratim Roy, Junyu Dong, Kaizhu Huang
2020 Sensors  
However, high-performance deep architectures are often accompanied by a large storage space and long computational time, which make it difficult to fully exploit many deep neural networks (DNNs), especially  ...  Specifically, on the benchmark CIFAR-10 and CIFAR-100 datasets, using NAC to compress the parameters of the original VGGNet by 92.96% and 81.10%, respectively, the compact network obtained still outperforms  ...  Specifically, we applied NAC on three networks: a deep belief network (DBN) and two convolutional neural networks (CNNs).  ... 
doi:10.3390/s20216033 pmid:33114078 pmcid:PMC7660330 fatcat:hetsfskkifdlbhdgkbdh66ked4
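A minimal sketch of the neuron-agglomeration idea on one fully connected layer: hierarchically cluster neurons by their incoming weight vectors and replace each cluster with its centroid neuron. Ward linkage and the target cluster count are assumptions here, not necessarily the NAC paper's settings.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.cluster.hierarchy import fcluster, linkage

def agglomerate_neurons(layer: nn.Linear, n_clusters: int) -> nn.Linear:
    W = layer.weight.detach().numpy()          # [out_features, in_features]
    b = layer.bias.detach().numpy()
    # Hierarchically cluster neurons by their incoming weight vectors.
    labels = fcluster(linkage(W, method="ward"), t=n_clusters, criterion="maxclust")
    uniq = np.unique(labels)

    # Represent each cluster by its mean ("agglomerated") neuron.
    new_W = np.stack([W[labels == c].mean(axis=0) for c in uniq])
    new_b = np.array([b[labels == c].mean() for c in uniq])

    merged = nn.Linear(layer.in_features, len(uniq))
    merged.weight.data = torch.from_numpy(new_W).float()
    merged.bias.data = torch.from_numpy(new_b).float()
    return merged  # the next layer's inputs must be re-wired to the merged neurons

fc = nn.Linear(512, 256)
print(agglomerate_neurons(fc, n_clusters=64))   # 256 neurons merged into ~64
```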

Efficient Deep Learning in Network Compression and Acceleration [chapter]

Shiming Ge
2018 Digital Systems  
In this chapter, I will present a comprehensive survey of several advanced approaches for efficient deep learning in network compression and acceleration. ... While deep learning delivers state-of-the-art accuracy on many artificial intelligence tasks, it comes at the cost of high computational complexity due to the large number of parameters. ... Later, they proposed a hybrid method, called Deep Compression [18], to compress deep neural networks with pruning, quantization, and Huffman coding. ...
doi:10.5772/intechopen.79562 fatcat:ya65wwhk5neppgxrut5phd42dy
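The chapter cites Deep Compression's three-stage pipeline. Below is a minimal sketch of the first two stages (magnitude pruning, then k-means weight sharing) applied to a single weight matrix; the sparsity level and codebook size are illustrative choices, and Huffman coding of the resulting indices is sketched separately under the survey entry at the end of this list.

```python
# Sketch of the first two Deep Compression stages on one weight matrix:
# magnitude pruning, then k-means weight sharing so surviving weights are
# stored as small cluster indices plus a codebook.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256)).astype(np.float32)

# Stage 1: prune the smallest-magnitude weights (keep the top 10%).
threshold = np.quantile(np.abs(W), 0.9)
mask = np.abs(W) > threshold
values = W[mask]

# Stage 2: k-means weight sharing over the surviving weights (1-D k-means).
k = 16
centroids = np.linspace(values.min(), values.max(), k)   # linear initialization
for _ in range(20):
    idx = np.argmin(np.abs(values[:, None] - centroids[None, :]), axis=1)
    for j in range(k):
        if np.any(idx == j):
            centroids[j] = values[idx == j].mean()

# The compressed layer is: the sparse mask, 4-bit indices, and a 16-entry codebook.
W_hat = np.zeros_like(W)
W_hat[mask] = centroids[idx]
print("kept weights:", values.size, "codebook size:", k,
      "reconstruction MSE:", float(np.mean((W - W_hat) ** 2)))
```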

Fine-Pruning: Joint Fine-Tuning and Compression of a Convolutional Network with Bayesian Optimization [article]

Frederick Tung, Srikanth Muralidharan, Greg Mori
2017 arXiv   pre-print
When approaching a novel visual recognition problem in a specialized image domain, a common strategy is to start with a pre-trained deep neural network and fine-tune it to the specialized domain. ... adapt over time; and the highly parameterized nature of state-of-the-art pruning methods makes it prohibitive to manually search the pruning parameter space for deep networks, leading to coarse approximations ... Figure 1: Consider the task of training a deep convolutional neural network on a specialized image domain (e.g. remote sensing images). ...
arXiv:1707.09102v1 fatcat:axuja5q77fc5hjdprxvbmwmdry
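The core of the approach is an alternating prune/fine-tune loop. The skeleton below shows only that loop structure; the Bayesian optimization of layer-wise pruning parameters, which is the paper's actual contribution, is omitted, and `model` and `train_loader` are placeholders assumed to exist.

```python
# Skeleton of an alternating prune / fine-tune loop. The fixed per-round
# pruning ratio is only a placeholder for the parameters Fine-Pruning would
# select with Bayesian optimization.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

def fine_prune(model, train_loader, rounds=5, per_round=0.2, epochs_per_round=1):
    opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    convs = [m for m in model.modules() if isinstance(m, nn.Conv2d)]
    for _ in range(rounds):
        # Prune: remove a further fraction of each conv layer's weights.
        for m in convs:
            prune.l1_unstructured(m, name="weight", amount=per_round)
        # Fine-tune: recover accuracy with the masks held fixed.
        for _ in range(epochs_per_round):
            for x, y in train_loader:
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()
    for m in convs:
        prune.remove(m, "weight")   # bake the final masks into the weights
    return model
```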

Self-grouping Convolutional Neural Networks [article]

Qingbei Guo and Xiao-Jun Wu and Josef Kittler and Zhiquan Feng
2020 arXiv   pre-print
Although group convolution operators are increasingly used in deep convolutional neural networks to improve the computational efficiency and to reduce the number of parameters, most existing methods construct ... To tackle this issue, we propose a novel method of designing self-grouping convolutional neural networks, called SG-CNN, in which the filters of each convolutional layer group themselves based on the similarity ...
arXiv:2009.13803v1 fatcat:njxhz5amcnbtjbcjyeqfz5pfpi
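A sketch of the grouping step the abstract describes: each filter is summarized by its per-input-channel importance, and similar filters are clustered into the same group. K-means and the group count are assumptions here; SG-CNN additionally prunes cross-group connections and converts the layer into an actual group convolution, which is not shown.

```python
# Group a conv layer's filters by the similarity of their per-input-channel
# importance vectors (a first step toward a group convolution).
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

conv = nn.Conv2d(64, 128, 3, padding=1)

# importance[i, j] = how strongly output filter i relies on input channel j
importance = conv.weight.detach().abs().sum(dim=(2, 3))        # [128, 64]
groups = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(importance.numpy())
for g in range(4):
    print(f"group {g}: {int((groups == g).sum())} filters")
```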

Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With Trainable Masked Layers [article]

Junjie Liu, Zhe Xu, Runbin Shi, Ray C. C. Cheung, Hayden K.H. So
2020 arXiv   pre-print
We present a novel network pruning algorithm called Dynamic Sparse Training that can jointly find the optimal network parameters and sparse network structure in a unified optimization process with trainable  ...  We demonstrate that our dynamic sparse training algorithm can easily train very sparse neural network models with little performance loss using the same number of training epochs as dense models.  ...  due to the over-parameterization of deep neural networks.  ... 
arXiv:2005.06870v1 fatcat:vtvnnw2smvbf5cqxtyhcuyin2a
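A minimal sketch of a trainable masked layer in this spirit: a learnable threshold masks low-magnitude weights, and a straight-through estimator lets gradients reach both the weights and the threshold. This simplifies the paper's formulation (which, for example, uses finer-grained thresholds and a sparsity regularizer that pushes them up).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinaryStep(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return (x > 0).float()
    @staticmethod
    def backward(ctx, grad_output):
        return grad_output          # straight-through estimator

class MaskedLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.threshold = nn.Parameter(torch.zeros(1))   # learnable pruning threshold

    def forward(self, x):
        # Mask out weights whose magnitude falls below the learned threshold.
        mask = BinaryStep.apply(self.weight.abs() - self.threshold)
        return F.linear(x, self.weight * mask, self.bias)

layer = MaskedLinear(784, 128)
out = layer(torch.randn(32, 784))
out.sum().backward()                 # threshold.grad is populated via the STE
print(layer.threshold.grad)
```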

Fine-grained energy profiling for deep convolutional neural networks on the Jetson TX1

Crefeda Faviola Rodrigues, Graham Riley, Mikel Lujan
2017 2017 IEEE International Symposium on Workload Characterization (IISWC)  
We present a novel evaluation framework for measuring energy and performance for deep neural networks, using ARM's Streamline Performance Analyser integrated with standard deep learning frameworks such as ... This is because deep neural networks are typically designed and trained on high-end GPUs or servers and require additional processing steps to deploy on low-power devices. ... Deep neural networks such as Convolutional Neural Networks (hereafter referred to as ConvNets) [2] are commonly employed in vision-based applications. ...
doi:10.1109/iiswc.2017.8167764 dblp:conf/iiswc/RodriguesRL17 fatcat:z4aqds63mnadndglj4u5nxvlwq

Recent Advances in Convolutional Neural Network Acceleration [article]

Qianru Zhang, Meng Zhang, Tinghuan Chen, Zhifei Sun, Yuzhe Ma, Bei Yu
2018 arXiv   pre-print
In recent years, convolutional neural networks (CNNs) have shown great performance in various fields such as image classification, pattern recognition, and multi-media compression. ... For example, Han et al. combine pruning with trained quantization and Huffman coding to compress deep neural networks in three steps. ... In terms of economy, GPUs are costly to set up for large deep convolutional neural networks. ...
arXiv:1807.08596v1 fatcat:jx66ekaofjhqzdbaueal476bvi
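The survey's example of Han et al.'s three steps ends with Huffman coding; the pruning and weight-sharing stages are sketched under the "Efficient Deep Learning" entry above. Below is a minimal Huffman-coding sketch for the cluster indices produced by weight sharing; the index distribution is synthetic, and only the average code length is computed, not an actual bitstream.

```python
import heapq
import itertools
from collections import Counter
import numpy as np

rng = np.random.default_rng(0)
probs = np.linspace(2.0, 0.1, 16)
indices = rng.choice(16, size=100_000, p=probs / probs.sum())   # synthetic 4-bit indices

def huffman_code_lengths(counts):
    tie = itertools.count()
    # Each heap item: (frequency, tie-breaker, list of (symbol, code_length)).
    heap = [(c, next(tie), [(sym, 0)]) for sym, c in counts.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, a = heapq.heappop(heap)
        f2, _, b = heapq.heappop(heap)
        # Merging two subtrees pushes every contained leaf one level deeper.
        heapq.heappush(heap, (f1 + f2, next(tie), [(s, d + 1) for s, d in a + b]))
    return dict(heap[0][2])

counts = Counter(indices.tolist())
lengths = huffman_code_lengths(counts)
avg_bits = sum(counts[s] * l for s, l in lengths.items()) / len(indices)
print(f"average {avg_bits:.2f} bits per index vs 4 bits fixed-length")
```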
Showing results 1 – 15 of 808