5,420 Hits in 4.7 sec

Compressing Deep Networks by Neuron Agglomerative Clustering

Li-Na Wang, Wenxue Liu, Xiang Liu, Guoqiang Zhong, Partha Pratim Roy, Junyu Dong, Kaizhu Huang
2020 Sensors  
In this paper, to tackle this problem, we introduce a method for compressing the structure and parameters of DNNs based on neuron agglomerative clustering (NAC).  ...  Specifically, we utilize the agglomerative clustering algorithm to find similar neurons; these similar neurons and the connections linked to them are then agglomerated together.  ...  Acknowledgments: The authors would like to thank the guest editors and the anonymous reviewers for their work and time on the publication of this paper.  ... 
doi:10.3390/s20216033 pmid:33114078 pmcid:PMC7660330 fatcat:hetsfskkifdlbhdgkbdh66ked4
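
As a rough illustration of the idea in this entry (a sketch, not the authors' implementation), the code below clusters the neurons of one fully-connected layer by their incoming weight vectors using scikit-learn's AgglomerativeClustering, then agglomerates each cluster by averaging incoming weights and summing outgoing weights. The layer sizes and the choice of 32 clusters are illustrative assumptions.

    import numpy as np
    from sklearn.cluster import AgglomerativeClustering

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(128, 64))   # layer l: 128 inputs -> 64 neurons
    W2 = rng.normal(size=(64, 10))    # layer l+1: 64 inputs -> 10 outputs

    # Group neurons of layer l whose incoming weight vectors are similar.
    n_keep = 32
    labels = AgglomerativeClustering(n_clusters=n_keep).fit_predict(W1.T)

    # Agglomerate each cluster: average the incoming weights and sum the outgoing
    # weights, so the next layer's pre-activations are approximately preserved.
    W1_c = np.stack([W1[:, labels == k].mean(axis=1) for k in range(n_keep)], axis=1)
    W2_c = np.stack([W2[labels == k, :].sum(axis=0) for k in range(n_keep)], axis=0)
    print(W1_c.shape, W2_c.shape)     # (128, 32) (32, 10)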

Multi-Task Zipping via Layer-wise Neuron Sharing [article]

Xiaoxi He, Zimu Zhou, Lothar Thiele
2019 arXiv   pre-print
We propose Multi-Task Zipping (MTZ), a framework to automatically merge correlated, pre-trained deep neural networks for cross-model compression.  ...  two networks with <0.5% increase in the test errors for both tasks.  ...  Conclusion We propose MTZ, a framework to automatically merge multiple correlated, well-trained deep neural networks for cross-model compression via neuron sharing.  ... 
arXiv:1805.09791v2 fatcat:mtsfbjfuqzawlixggzczyw22tq

Diversity Networks: Neural Network Compression Using Determinantal Point Processes [article]

Zelda Mariet, Suvrit Sra
2017 arXiv   pre-print
We introduce Divnet, a flexible technique for learning networks with diverse neurons. Divnet models neuronal diversity by placing a Determinantal Point Process (DPP) over neurons in a given layer.  ...  We present experimental results to corroborate our claims: for pruning neural networks, Divnet is seen to be notably superior to competing approaches.  ...  These methods focus on deleting parameters whose removal influences the network the least, while Divnet seeks diversity and merges similar neurons; these methods can thus be used in conjunction with ours  ... 
arXiv:1511.05077v6 fatcat:l4laul3l7vfdleqthpqq3fmh64
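
For context on the diversity idea (a sketch, not Divnet's actual DPP sampler), the following greedily selects a diverse subset of neurons by maximizing the log-determinant of a kernel built from neuron activations, a common MAP-style approximation to DPP sampling. The activation matrix, kernel construction, and jitter term are illustrative assumptions.

    import numpy as np

    def greedy_diverse_subset(L, k):
        # Greedy MAP-style approximation to DPP selection: repeatedly add the
        # neuron that yields the largest log-determinant of the selected
        # kernel submatrix, i.e. the most "diverse" addition.
        selected = []
        for _ in range(k):
            best, best_logdet = None, -np.inf
            for i in range(L.shape[0]):
                if i in selected:
                    continue
                idx = selected + [i]
                sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
                if sign > 0 and logdet > best_logdet:
                    best, best_logdet = i, logdet
            selected.append(best)
        return selected

    rng = np.random.default_rng(0)
    A = rng.normal(size=(200, 64))            # activations of 64 neurons on 200 inputs
    K = A.T @ A / A.shape[0]                  # similarity kernel over neurons
    keep = greedy_diverse_subset(K + 1e-6 * np.eye(64), k=16)
    print(sorted(keep))                       # indices of a diverse subset to retain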

Literature Review of Deep Network Compression

Ali Alqahtani, Xianghua Xie, Mark W. Jones
2021 Informatics  
In this paper, we present an overview of popular methods and review recent works on compressing and accelerating deep neural networks.  ...  Deep networks often possess a vast number of parameters, and their significant redundancy in parameterization has become a widely-recognized property.  ...  [39] introduced Divnet, which selects a subset of diverse neurons and merges similar neurons into one.  ... 
doi:10.3390/informatics8040077 fatcat:u2dzzibapnf2dbjdqkvgl3pztu

Neural Behavior-Based Approach for Neural Network Pruning

Koji KAMMA, Yuki ISODA, Sarimu INOUE, Toshikazu WADA
2020 IEICE transactions on information and systems  
neurons having similar behavioral vectors.  ...  Therefore, the proposed method can reduce the number of neurons with only a small sacrifice in accuracy, without retraining. Our method can also be applied to compress convolutional layers.  ...  Introduction: Deep neural networks (DNNs) have been showing dominant performance in machine learning tasks. The key is the scale of the models.  ... 
doi:10.1587/transinf.2019edp7177 fatcat:uawh24gzobhenhspsezyrg3qxm

RED++ : Data-Free Pruning of Deep Neural Networks via Input Splitting and Output Merging [article]

Edouard Yvinec, Arnaud Dapogny, Matthieu Cord, Kevin Bailly
2021 arXiv   pre-print
Pruning Deep Neural Networks (DNNs) is a prominent field of study with the goal of inference runtime acceleration. In this paper, we introduce a novel data-free pruning protocol RED++.  ...  Only requiring a trained neural network, and not specific to DNN architecture, we exploit an adaptive data-free scalar hashing which exhibits redundancies among neuron weight values.  ...  The merged network weights W are obtained by merging the first 2 neurons of layer l and updating the consecutive layer by simply summing the corresponding weights.  ... 
arXiv:2110.01397v1 fatcat:qhy4hn6bifhdjg53v43ztstqx4
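
A tiny numerical sketch of the kind of output merging described in this entry, under the assumption that two neurons of layer l are exactly identical: drop one of them and add its row of the next layer's weight matrix to the kept neuron's row, which leaves the next layer's pre-activations unchanged. The weights below are arbitrary toy values, not taken from RED++.

    import numpy as np

    # Layer l: 3 inputs -> 4 neurons; layer l+1: 4 inputs -> 2 outputs.
    Wl = np.array([[ 0.5,  0.5, -1.0,  0.2],
                   [ 0.1,  0.1,  0.3,  0.7],
                   [-0.2, -0.2,  0.9,  0.4]])   # columns 0 and 1: identical neurons
    Wl1 = np.array([[ 0.3, -0.6],
                    [ 0.8,  0.1],
                    [ 0.2,  0.2],
                    [-0.4,  0.5]])

    x = np.array([1.0, -2.0, 0.5])
    h = np.maximum(Wl.T @ x, 0)                 # ReLU activations of layer l
    y = Wl1.T @ h                               # original output

    # Merge: drop neuron 1 and add its outgoing weights (row 1 of Wl1) onto
    # neuron 0's outgoing weights.
    Wl_m = np.delete(Wl, 1, axis=1)
    Wl1_m = np.delete(Wl1, 1, axis=0)
    Wl1_m[0] += Wl1[1]

    y_m = Wl1_m.T @ np.maximum(Wl_m.T @ x, 0)
    print(np.allclose(y, y_m))                  # True: the merge is lossless here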

Incremental Layers Resection: A Novel Method to Compress Neural Networks

Xiang Liu, Li-Na Wang, Wenxue Liu, Guoqiang Zhong, Junyu Dong
2019 IEEE Access  
In recent years, deep neural networks (DNNs) have been widely applied in many areas, such as computer vision and pattern recognition.  ...  Extensive experiments demonstrate that, compared to the original networks, the ones compressed by ILR need only about half of the storage space and have higher inference speed.  ...  In [19], a similar-neuron merging algorithm has been proposed to merge neurons with similar performance in the same layer.  ... 
doi:10.1109/access.2019.2952615 fatcat:ipm6yjvhkzfvzcmmyavm2wxgoq

DeepAbstract: Neural Network Abstraction for Accelerating Verification [article]

Pranav Ashok, Vahid Hashemi, Jan Křetínský, Stefanie Mohr
2020 arXiv   pre-print
While abstraction is a classic tool of verification to scale it up, it is not used very often for verifying neural networks.  ...  We introduce an abstraction framework applicable to fully-connected feed-forward neural networks based on clustering of neurons that behave similarly on some inputs.  ...  Neurons 1-7 and 9 are unaffected by the merging procedure. For neuron 11, we have l̃_11 = 5 and ũ_11 = 13, and for neuron 12, l̃_12 = 0 and ũ_12 = 4.  ... 
arXiv:2006.13735v1 fatcat:piex3f2zmjgrncwiflqrxx5fxa
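
A loose sketch of the clustering-based abstraction described here (not the authors' tool): neurons are grouped by the similarity of their activations on sample inputs with k-means, each group is replaced by its cluster-mean activation, and empirical value ranges of the next layer's pre-activations are then read off as illustrative stand-ins for the l̃/ũ bounds mentioned in the snippet. All sizes and data are invented for the example.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 16))                     # sample inputs
    W1 = rng.normal(size=(16, 40))
    W2 = rng.normal(size=(40, 5))
    A = np.maximum(X @ W1, 0)                          # activations of 40 hidden neurons

    # Cluster neurons that behave similarly on the samples; let the cluster-mean
    # activation stand in for every neuron of the cluster (the abstraction).
    km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(A.T)
    A_abs = km.cluster_centers_[km.labels_].T          # each neuron replaced by its cluster mean

    # Empirical value ranges of the next layer's pre-activations under the
    # abstraction (illustrative stand-ins for the lower/upper bounds in the text).
    Z_abs = A_abs @ W2
    print(np.round(Z_abs.min(axis=0), 2), np.round(Z_abs.max(axis=0), 2))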

Deep Autoencoder-Based Image Compression using Multi-Layer Perceptrons

2020 International journal of soft computing and engineering  
In this research, a deep autoencoder-based multi-layer feed-forward neural network has been proposed to achieve image compression.  ...  Artificial neural networks are among the most heavily used alternatives for solving complex problems in machine learning and deep learning.  ...  In this research, we used a deep neural network that consists of multiple hidden layers. When it comes to image compression, deep learning is heavily used.  ... 
doi:10.35940/ijsce.e3357.039620 fatcat:o3qzl5ufmzhobg5h7gqofedxpe

Deep Learning in Biological Data Analysis

Jingyu Guo
2017 MOJ Proteomics & Bioinformatics  
Then the downstream layers of a deep neural network can learn from such similarity information and draw patterns from the transformed datasets.  ...  Feature learning: With its deep structure, the low (beginning, or closer to the input) neuron layers of a deep neural network may not directly perform a supervised learning task.  ... 
doi:10.15406/mojpb.2017.05.00148 fatcat:gauid6m36vfhjociw3fbtdw2g4

Go Wide, Then Narrow: Efficient Training of Deep Thin Networks [article]

Denny Zhou, Mao Ye, Chen Chen, Tianjian Meng, Mingxing Tan, Xiaodan Song, Quoc Le, Qiang Liu, Dale Schuurmans
2020 arXiv   pre-print
In this paper, we propose an efficient method to train a deep thin network with a theoretical guarantee. Our method is motivated by model compression. It consists of three stages.  ...  To deploy a deep learning model into production, it needs to be both accurate and compact to meet latency and memory constraints.  ...  In this paper, we propose a generic algorithm to train deep thin networks with a theoretical guarantee. Our method is motivated by model compression.  ... 
arXiv:2007.00811v2 fatcat:zz4tc7tmi5e6rmvd66hulk3ri4

Compressing Deep Neural Networks via Layer Fusion [article]

James O'Neill, Greg Ver Steeg, Aram Galstyan
2020 arXiv   pre-print
From experiments on CIFAR-10, we find that various deep convolutional neural networks can remain within 2% accuracy points of the original networks up to a compression ratio of 3.33 when iteratively retrained  ...  For experiments on the WikiText-2 language modelling dataset where pretrained transformer models are used, we achieve compression that leads to a network that is 20% of its original size while being within  ...  We find that merging the most similar layers during the retraining process of an already deep pretrained neural network leads to competitive performance when compared against the original network, while maintaining  ... 
arXiv:2007.14917v1 fatcat:7j3gh3gudbgm7escljljdrzjme
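
To make the "most similar layers" notion concrete (an assumption-laden sketch, not the paper's fusion procedure), the code below scores pairs of equal-width layers with linear CKA on their activations and reports the pair that would be the strongest candidate for fusion. The activation matrices are synthetic.

    import numpy as np

    def linear_cka(X, Y):
        # Linear CKA between two activation matrices (samples x neurons), a
        # common measure of representational similarity between layers.
        Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)
        hsic = np.linalg.norm(Yc.T @ Xc, 'fro') ** 2
        return hsic / (np.linalg.norm(Xc.T @ Xc, 'fro') * np.linalg.norm(Yc.T @ Yc, 'fro'))

    rng = np.random.default_rng(0)
    acts = [rng.normal(size=(256, 64)) for _ in range(4)]  # 4 same-width layers (synthetic)
    acts[2] = acts[1] + 0.05 * rng.normal(size=(256, 64))  # make layers 1 and 2 nearly redundant

    pairs = [(i, j) for i in range(4) for j in range(i + 1, 4)]
    best = max(pairs, key=lambda p: linear_cka(acts[p[0]], acts[p[1]]))
    print("strongest fusion candidate:", best)             # expected: (1, 2)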

Complexity Reduction of Learned In-Loop Filtering in Video Coding [article]

Woody Bayliss, Luka Murn, Ebroul Izquierdo, Qianni Zhang, Marta Mrak
2022 arXiv   pre-print
Through initial tests we find that network parameters can be significantly reduced with a minimal impact on network performance.  ...  This is done through a three-step training process of magnitude-guided weight pruning, insignificant neuron identification and removal, and fine-tuning.  ...  ACKNOWLEDGEMENTS This work was supported by the UK Engineering and Physical Sciences Research Council (EPSRC) grant 2246465 for Queen Mary University of London, Multimedia and Vision Group (MMV), and the  ... 
arXiv:2203.08650v2 fatcat:2llbwxyjl5fpvfmor67xe4ixcq

Deep Epitome for Unravelling Generalized Hamming Network: A Fuzzy Logic Interpretation of Deep Learning [article]

Lixin Fan
2017 arXiv   pre-print
For network visualization, the constructed deep epitomes at each layer provide a visualization of the network's internal representation that does not rely on the input data.  ...  Moreover, deep epitomes allow the direct extraction of features in just one step, without resorting to the regularized optimizations used in existing visualization tools.  ...  Then the negative GHD is used to quantify the similarity between neuron inputs x and weights w: −g(w, x) = (2/L) w · x − (1/L) Σ_{l=1}^{L} w_l − (1/L) Σ_{l=1}^{L} x_l, (1) in which L denotes the length of the neuron weight vector  ... 
arXiv:1711.05397v1 fatcat:gdqvsja4xjdibdhimgmkgtvfc4
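
A small helper that evaluates the reconstructed Eq. (1), i.e. the negative generalized Hamming distance between a neuron's weight vector and an input; the vectors are toy values.

    import numpy as np

    def neg_ghd(w, x):
        # Negative generalized Hamming distance, Eq. (1):
        # (2/L) * w.x - (1/L) * sum(w) - (1/L) * sum(x)
        L = len(w)
        return 2.0 / L * np.dot(w, x) - np.mean(w) - np.mean(x)

    w = np.array([0.2, -0.5, 0.8, 0.1])   # toy neuron weights
    x = np.array([1.0,  0.0, 0.5, -1.0])  # toy neuron inputs
    print(neg_ghd(w, x))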

Lossless Compression of Structured Convolutional Models via Lifting [article]

Gustav Sourek, Filip Zelezny, Ondrej Kuzelka
2021 arXiv   pre-print
We demonstrate through experiments that such compression can lead to significant speedups of structured convolutional models, such as various Graph Neural Networks, across various tasks, such as molecule  ...  Inspired by lifting, we introduce a simple and efficient technique to detect the symmetries and compress the neural models without loss of any information.  ...  Algorithm 1 (neural network compression based on detecting functional symmetries): 1: function COMPRESS(Network); 2: N ← neurons(Network); 3: N ← topologicOrder(N); 4: W ← weights(Network); 5: M  ... 
arXiv:2007.06567v2 fatcat:3ea3dzdeufd4bjnn5bsjwf5cgi
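
As a crude stand-in for the symmetry detection sketched in Algorithm 1 (the hashing trick and toy sizes below are assumptions, not the authors' implementation), neurons whose incoming weight vectors coincide exactly are grouped by using the vector itself as a dictionary key; such neurons are interchangeable and can be collapsed into one without changing the network's function.

    import numpy as np
    from collections import defaultdict

    rng = np.random.default_rng(0)
    W = rng.normal(size=(32, 12)).round(1)   # incoming weights of 12 neurons (quantized)
    W[:, 7] = W[:, 2]                        # neurons 2 and 7 compute the same function
    W[:, 9] = W[:, 4]

    # Group neurons whose incoming weight vectors coincide, using the vector
    # itself as a dictionary key: members of a group are interchangeable.
    groups = defaultdict(list)
    for j in range(W.shape[1]):
        groups[tuple(W[:, j])].append(j)

    symmetric = [g for g in groups.values() if len(g) > 1]
    print("interchangeable neuron groups:", symmetric)   # e.g. [[2, 7], [4, 9]]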
Showing results 1 — 15 out of 5,420 results