
What is the State of Neural Network Pruning? [article]

Davis Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, John Guttag
2020 arXiv   pre-print
Neural network pruning---the task of reducing the size of a network by removing parameters---has been the subject of a great deal of work in recent years.  ...  This deficiency is substantial enough that it is hard to compare pruning techniques to one another or determine how much progress the field has made over the past three decades.  ...  This research was supported by the Qualcomm Innovation Fellowship, the "la Caixa" Foundation Fellowship, Quanta Computer, and Wistron Corporation.  ... 
arXiv:2003.03033v1 fatcat:vethdh56xvap5c3dnwitq6cmvu
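
To make the task concrete: a minimal sketch of unstructured magnitude pruning, the simplest way of "removing parameters" in the sense surveyed above, is given below. The layer shape and the 90% sparsity target are illustrative assumptions, and the snippet is plain NumPy rather than any particular paper's implementation.

    import numpy as np

    def magnitude_prune(weights, sparsity):
        # Zero out the smallest-magnitude entries so that roughly `sparsity` of them are removed.
        flat = np.abs(weights).ravel()
        k = int(sparsity * flat.size)
        if k == 0:
            return weights.copy()
        threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
        mask = np.abs(weights) > threshold
        return weights * mask

    rng = np.random.default_rng(0)
    w = rng.normal(size=(256, 128))                    # a hypothetical dense layer
    w_pruned = magnitude_prune(w, sparsity=0.9)
    print("fraction of zeros:", float(np.mean(w_pruned == 0)))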

Partition Pruning: Parallelization-Aware Pruning for Deep Neural Networks [article]

Sina Shahhosseini, Ahmad Albaqsami, Masoomeh Jasemi, Nader Bagherzadeh
2019 arXiv   pre-print
Parameters of recent neural networks require a huge amount of memory. These parameters are used by neural networks to perform machine learning tasks when processing inputs.  ...  We evaluated the performance and energy consumption of parallel inference of partitioned models, which showed a 7.72x speedup in performance and a 2.73x reduction in the energy used for computing pruned  ...  Initially, the neural networks are trained and evaluated on the TinyImageNet dataset, as shown in Table 2. Convolutional neural networks represent the state of the art in image classification.  ... 
arXiv:1901.11391v2 fatcat:wyl7n43oubfndazcse7t44lici

Prune Responsibly [article]

Michela Paganini
2020 arXiv   pre-print
around neural network pruning.  ...  In response to the calls for quantitative evaluation of AI models to be population-aware, we present neural network pruning as a tangible application domain where the ways in which accuracy-efficiency  ...  As shown in Fig. 3a, this is the state that high-sparsity pruned networks tend toward.  ... 
arXiv:2009.09936v1 fatcat:imehfgs46raexf2tlzr6liwm3e

Connection pruning with static and adaptive pruning schedules

Lutz Prechelt
1997 Neurocomputing  
Neural network pruning methods on the level of individual network parameters (e.g., connection weights) can improve generalization, as is shown in this empirical study.  ...  However, an open problem in the pruning methods known today (e.g., OBD, OBS, autoprune, epsiprune) is the selection of the number of parameters to be removed in each pruning step (the pruning strength).  ...  Availability of Raw Data: The datasets used in the experiments are from the Proben1 benchmark collection, which is available for anonymous ftp from ftp://ftp.ira.uka.de/pub/neuron/proben1.tar.gz and also  ... 
doi:10.1016/s0925-2312(96)00054-9 fatcat:utcj7phaifadznamywvlqgcypa
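
A rough, non-authoritative sketch of the schedule question raised above (how much to prune at each step): a static schedule removes a fixed fraction per step, while an adaptive one adjusts the pruning strength based on feedback. The halving rule and tolerance below are invented for illustration and are not the OBD/OBS/autoprune/epsiprune criteria.

    def static_schedule(num_steps, fraction_per_step):
        # Same pruning strength at every step.
        return [fraction_per_step] * num_steps

    def adaptive_schedule(val_losses, base_fraction=0.1, tolerance=0.02):
        # Halve the pruning strength whenever validation loss rose by more than `tolerance`.
        fractions, strength = [], base_fraction
        for prev_loss, curr_loss in zip(val_losses, val_losses[1:]):
            if curr_loss - prev_loss > tolerance:
                strength = max(strength / 2, 0.01)
            fractions.append(strength)
        return fractions

    print(static_schedule(5, 0.1))                            # [0.1, 0.1, 0.1, 0.1, 0.1]
    print(adaptive_schedule([0.40, 0.41, 0.46, 0.47, 0.52]))  # [0.1, 0.05, 0.05, 0.025]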

Pruning from Scratch

Yulong Wang, Xiaolu Zhang, Lingxi Xie, Jun Zhou, Hang Su, Bo Zhang, Xiaolin Hu
2020 Proceedings of the AAAI Conference on Artificial Intelligence (AAAI-20)  
Network pruning is an important research field aiming at reducing the computational costs of neural networks.  ...  Our results encourage the community to rethink the effectiveness of existing techniques used for network pruning.  ...  The overall pipeline is depicted in Figure 1. In what follows, we show that in the common pipeline of network pruning, the role of pre-training is quite different from what we used to think.  ... 
doi:10.1609/aaai.v34i07.6910 fatcat:i4lwvfocevgghalfmqxadjqel4

Dirichlet Pruning for Neural Network Compression [article]

Kamil Adamczewski, Mijung Park
2021 arXiv   pre-print
We introduce Dirichlet pruning, a novel post-processing technique to transform a large neural network model into a compressed one.  ...  Dirichlet pruning is a form of structured pruning that assigns a Dirichlet distribution over each layer's channels in convolutional layers (or neurons in fully-connected layers) and estimates the parameters  ...  Compression: The goal of neural network compression is to decrease the size of the network in such a way that the slimmer network, which is a subset of the larger network, retains the original performance  ... 
arXiv:2011.05985v3 fatcat:pop4jgsfebdprlpxqufvnxgrga
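
A minimal sketch of the structured-pruning idea described above, assuming per-channel importance comes from a Dirichlet distribution; in the actual method the concentration parameters are estimated during training, whereas here they are random placeholders and the layer sizes are made up.

    import numpy as np

    rng = np.random.default_rng(0)
    num_channels, keep = 64, 48

    concentration = rng.uniform(0.5, 5.0, size=num_channels)   # stand-in for learned concentrations
    importance = rng.dirichlet(concentration)                   # per-channel importance, sums to 1
    keep_idx = np.sort(np.argsort(importance)[-keep:])          # channels with the highest importance

    conv_weight = rng.normal(size=(num_channels, 32, 3, 3))     # (out_ch, in_ch, kH, kW)
    pruned_weight = conv_weight[keep_idx]                       # structured: whole filters removed
    print(pruned_weight.shape)                                  # (48, 32, 3, 3)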

Pruning from Scratch [article]

Yulong Wang, Xiaolu Zhang, Lingxi Xie, Jun Zhou, Hang Su, Bo Zhang, Xiaolin Hu
2019 arXiv   pre-print
Network pruning is an important research field aiming at reducing the computational costs of neural networks.  ...  Our results encourage the community to rethink the effectiveness of existing techniques used for network pruning.  ...  The overall pipeline is depicted in Figure 1. In what follows, we show that in the common pipeline of network pruning, the role of pre-training is quite different from what we used to think.  ... 
arXiv:1909.12579v1 fatcat:f2guvau2yvdcfnvptj2el5e7oy

Perchance to Prune

Giulio Tononi, Chiara Cirelli
2013 Scientific American  
What is the point of this unceasing activity at a time when we are supposedly resting?  ...  Their research on the function of sleep is part of a larger investigation of human consciousness, the subject of Tononi's recent book Phi: A Voyage from the Brain to the Soul (Pantheon, 2012).  ...  state the billions of neural connections that get modified every day by the events of waking life.  ... 
doi:10.1038/scientificamerican0813-34 pmid:23923204 fatcat:yaz7ellmsze5hhmnvckt2bdvza

Pruning by Explaining: A Novel Criterion for Deep Neural Network Pruning [article]

Seul-Ki Yeom, Philipp Seegerer, Sebastian Lapuschkin, Alexander Binder, Simon Wiedemann, Klaus-Robert Müller, Wojciech Samek
2020 arXiv   pre-print
The success of convolutional neural networks (CNNs) in various applications is accompanied by a significant increase in computation and parameter storage costs.  ...  Notably, our novel criterion is not only competitive with or better than state-of-the-art pruning criteria when successive retraining is performed, but clearly outperforms these previous criteria in  ...  Note that overparametrization is helpful for efficient and successful training of neural networks; however, once the trained and well-generalizing network structure is established, pruning can help  ... 
arXiv:1912.08881v2 fatcat:igbedbkz5vb7ldcfluk7qkbxle

Self-Adaptive Network Pruning [article]

Jinting Chen, Zhaocheng Zhu, Cheng Li, Yuming Zhao
2019 arXiv   pre-print
Extensive experiments on 2 datasets and 3 backbones show that SANP surpasses state-of-the-art methods in both classification accuracy and pruning rate.  ...  Deep convolutional neural networks have proven successful on a wide range of tasks, yet they are still hindered by their large computation cost in many industrial scenarios.  ...  5.2 WHAT IS THE DISTRIBUTION OF PRUNED CHANNELS?
arXiv:1910.08906v1 fatcat:5asmrpv5a5cuvirmu7gdfg3dwi

Pruning at a Glance: Global Neural Pruning for Model Compression [article]

Abdullah Salama, Oleksiy Ostapenko, Tassilo Klein, Moin Nabi
2019 arXiv   pre-print
To address these limitations, we propose a novel and simple pruning method that compresses neural networks by removing entire filters and neurons according to a global threshold across the network without  ...  Our method not only exhibits good performance but is also easy to implement.  ...  the compression percentages directly convert to a reduction in the computation needed by the model, as the computational power of a neural network model is dominated by the computational requirements of  ... 
arXiv:1912.00200v2 fatcat:h65bjw65obhyhmbcioh63fp3dm
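
The global-threshold idea above can be sketched as follows: score every filter in the network on a comparable scale and drop those below one network-wide threshold, instead of fixing a pruning ratio per layer. The layer shapes and the 50% global pruning rate are illustrative assumptions, not values from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    layers = {                                    # hypothetical conv layers: (out_ch, in_ch, kH, kW)
        "conv1": rng.normal(size=(32, 3, 3, 3)),
        "conv2": rng.normal(size=(64, 32, 3, 3)),
        "conv3": rng.normal(size=(128, 64, 3, 3)),
    }

    # One score per filter: mean absolute weight, so scores are comparable across layers.
    scores = {name: np.abs(w).mean(axis=(1, 2, 3)) for name, w in layers.items()}
    all_scores = np.concatenate(list(scores.values()))
    threshold = np.quantile(all_scores, 0.5)      # single global threshold (~50% of filters pruned)

    kept = {name: w[scores[name] >= threshold] for name, w in layers.items()}
    for name, w in kept.items():
        print(name, layers[name].shape[0], "->", w.shape[0], "filters kept")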

Differentiable Network Pruning for Microcontrollers [article]

Edgar Liberis, Nicholas D. Lane
2021 arXiv   pre-print
Orders of magnitude less storage, memory and computational capacity, compared to what is typically required to execute neural networks, impose strict structural constraints on the network architecture  ...  In this work, we present a differentiable structured network pruning method for convolutional neural networks, which integrates a model's MCU-specific resource usage and parameter importance feedback to  ...  Lee, N., Ajanthan, T., and Torr, P. H. SNIP: Single-shot  ...  What is the state of neural network pruning?  ... 
arXiv:2110.08350v2 fatcat:7jmlafbgzjexbbtdyslmlx6ig4
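
As a loose conceptual stand-in for the resource-aware pruning described above (the paper itself uses a differentiable formulation), the sketch below greedily keeps the channels with the best importance-per-byte until a hypothetical MCU weight budget is met; the importance values, per-channel costs, and 64 KiB budget are all assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    num_channels = 64
    importance = rng.random(num_channels)                       # stand-in for learned importance
    bytes_per_channel = np.full(num_channels, 4 * 3 * 3 * 32)   # float32, 3x3 kernels, 32 input channels
    budget_bytes = 64 * 1024                                    # hypothetical 64 KiB weight budget

    order = np.argsort(importance / bytes_per_channel)[::-1]    # best importance-per-byte first
    kept, used = [], 0
    for ch in order:
        if used + bytes_per_channel[ch] <= budget_bytes:
            kept.append(int(ch))
            used += int(bytes_per_channel[ch])
    print(f"kept {len(kept)}/{num_channels} channels using {used} bytes")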

Sparse Flows: Pruning Continuous-depth Models [article]

Lucas Liebenwein, Ramin Hasani, Alexander Amini, Daniela Rus
2021 arXiv   pre-print
Moreover, pruning finds efficient neural ODE representations with up to 98% fewer parameters compared to the original network, without loss of accuracy.  ...  We empirically show that the improvement is because pruning helps avoid mode collapse and flatten the loss surface.  ...  The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the United States Air  ... 
arXiv:2106.12718v2 fatcat:zmb6kyshzrd7hehjsutdf3glce

APP: Anytime Progressive Pruning [article]

Diganta Misra, Bharat Runwal, Tianlong Chen, Zhangyang Wang, Irina Rish
2022 arXiv   pre-print
In this paper, we explore the problem of training a neural network with a target sparsity in a particular case of online learning: the anytime learning at macroscale paradigm (ALMA).  ...  We propose a novel way of progressive pruning, referred to as Anytime Progressive Pruning (APP); the proposed approach significantly outperforms the baseline dense and Anytime OSP models across multiple  ...  Acknowledgements The authors express their sincere gratitude to Gintare Karolina Dziugaite (Google Brain) and Himanshu Arora (Workday) for providing valuable initial feedback in refining the idea, and  ... 
arXiv:2204.01640v1 fatcat:aaleqmkslfgtpdddmpsydxyki4
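
A minimal sketch of a progressive pruning schedule in the spirit of the entry above: sparsity is ramped gradually toward the target as training proceeds instead of pruning in one shot. The cubic ramp is a generic choice assumed for illustration, not the paper's exact rule.

    def progressive_sparsity(step, total_steps, target):
        # Cubic ramp from 0 sparsity at step 0 to `target` sparsity at the final step.
        progress = min(step / total_steps, 1.0)
        return target * (1.0 - (1.0 - progress) ** 3)

    for step in range(0, 11, 2):
        print(f"step {step:2d}: prune to {progressive_sparsity(step, 10, 0.9):.2f} sparsity")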

Adversarial Neural Pruning with Latent Vulnerability Suppression [article]

Divyam Madaan, Jinwoo Shin, Sung Ju Hwang
2020 arXiv   pre-print
We validate our Adversarial Neural Pruning with Vulnerability Suppression (ANP-VS) method on multiple benchmark datasets, on which it not only obtains state-of-the-art adversarial robustness but also improves the performance on clean examples, using only a fraction of the parameters used by the full network.  ...  Acknowledgements: We thank the anonymous reviewers for their insightful comments and suggestions. We are also grateful to the authors of Lee et al. (2018)  ... 
arXiv:1908.04355v4 fatcat:piwkckxbi5fxrnk75vylcytt64
Showing results 1 — 15 out of 21,119 results