730 Hits in 3.8 sec

SiPPing Neural Networks: Sensitivity-informed Provable Pruning of Neural Networks [article]

Cenk Baykal, Lucas Liebenwein, Igor Gilitschenski, Dan Feldman, Daniela Rus
2021 arXiv   pre-print
We introduce a pruning algorithm that provably sparsifies the parameters of a trained model in a way that approximately preserves the model's predictive accuracy.  ...  Our pruning method is simultaneously computationally efficient, provably accurate, and broadly applicable to various network architectures and data distributions.  ...  Conclusion: In this work, we presented a simultaneously provable and practical family of network pruning methods, SiPP, that is grounded in a data-informed measure of sensitivity.  ... 
arXiv:1910.05422v2 fatcat:pcabdz5rxjgdjjvlqeyso26axa
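
The snippet only names the idea of a data-informed sensitivity measure, so here is a rough NumPy sketch of that idea, not the authors' actual SiPP algorithm: score each weight by its largest relative contribution to its neuron's pre-activation over a small calibration batch, then zero low-scoring weights. Function names, shapes, and the keep_ratio threshold are illustrative assumptions; SiPP itself samples weights with provable error bounds rather than hard-thresholding.

```python
import numpy as np

def sensitivity_scores(W, A):
    # Relative contribution of each weight to its neuron's pre-activation,
    # measured over a small calibration batch A of shape (n, d_in).
    contrib = np.abs(A[:, None, :] * W[None, :, :])       # (n, d_out, d_in)
    totals = contrib.sum(axis=2, keepdims=True) + 1e-12   # per-sample, per-neuron totals
    return (contrib / totals).max(axis=0)                 # (d_out, d_in) sensitivity scores

def prune_by_sensitivity(W, A, keep_ratio=0.3):
    s = sensitivity_scores(W, A)
    k = max(1, int(keep_ratio * W.size))
    thresh = np.partition(s.ravel(), -k)[-k]               # k-th largest score
    return np.where(s >= thresh, W, 0.0)

# toy usage: prune a 64x128 dense layer with a 32-sample calibration batch
W = np.random.randn(64, 128)
A = np.random.randn(32, 128)
W_sparse = prune_by_sensitivity(W, A, keep_ratio=0.3)
```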

Provable Filter Pruning for Efficient Neural Networks [article]

Lucas Liebenwein, Cenk Baykal, Harry Lang, Dan Feldman, Daniela Rus
2020 arXiv   pre-print
In contrast to existing filter pruning approaches, our method is simultaneously data-informed, exhibits provable guarantees on the size and performance of the pruned network, and is widely applicable to  ...  We present a provable, sampling-based approach for generating compact Convolutional Neural Networks (CNNs) by identifying and removing redundant filters from an over-parameterized network.  ...  Sipping neural networks: Sensitivity-informed provable pruning of neural networks. arXiv preprint arXiv:1910.05422, 2019b. Vladimir Braverman, Dan Feldman, and Harry Lang.  ... 
arXiv:1911.07412v2 fatcat:l5drcoblgvdxfcksho5g7inhue
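
As a structural illustration of the sampling-based, data-informed filter pruning the abstract describes (and not the paper's actual procedure, which samples filters with sensitivity-derived probabilities and comes with size/error guarantees), the sketch below scores each output channel by its mean activation magnitude on a calibration batch and keeps the top fraction. All function names and the keep_ratio parameter are assumptions.

```python
import torch
import torch.nn as nn

def filter_importance(conv: nn.Conv2d, calib_batch: torch.Tensor) -> torch.Tensor:
    # Data-informed proxy: mean absolute activation of each output channel
    # over a small calibration batch.
    with torch.no_grad():
        fmap = conv(calib_batch)                  # (n, C_out, H, W)
        return fmap.abs().mean(dim=(0, 2, 3))     # one score per filter

def keep_top_filters(conv: nn.Conv2d, calib_batch: torch.Tensor, keep_ratio: float = 0.5):
    scores = filter_importance(conv, calib_batch)
    k = max(1, int(keep_ratio * scores.numel()))
    keep = torch.topk(scores, k).indices.sort().values
    slim = nn.Conv2d(conv.in_channels, k, conv.kernel_size, stride=conv.stride,
                     padding=conv.padding, bias=conv.bias is not None)
    slim.weight.data = conv.weight.data[keep].clone()
    if conv.bias is not None:
        slim.bias.data = conv.bias.data[keep].clone()
    return slim, keep  # 'keep' tells the following layer which input channels survive
```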

Good Subnetworks Provably Exist: Pruning via Greedy Forward Selection [article]

Mao Ye, Chengyue Gong, Lizhen Nie, Denny Zhou, Adam Klivans, Qiang Liu
2020 arXiv   pre-print
However, most existing methods of network pruning are empirical and heuristic, leaving it open whether good subnetworks provably exist, how to find them efficiently, and if network pruning can be provably  ...  Practically, we improve prior arts of network pruning on learning compact neural architectures on ImageNet, including ResNet, MobilenetV2/V3, and ProxylessNet.  ...  Sipping neural networks: Sensitivity- informed provable pruning of neural networks. arXiv preprint arXiv:1910.05422, 2019b. Cai, H., Zhu, L., and Han, S.  ... 
arXiv:2003.01794v3 fatcat:wsfcq6em4zd4domik3sp7ltdk4
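
To make "greedy forward selection" concrete, here is a minimal sketch of the general pattern, assuming a single layer and a least-squares refit: start from an empty subnetwork and repeatedly add the neuron that best explains the remaining error of the dense layer's output on a batch. The paper's actual procedure works on full networks with different update rules; the names and refit choice below are illustrative.

```python
import numpy as np

def greedy_forward_select(neuron_outputs, dense_output, k):
    # neuron_outputs: (num_neurons, n) outputs of each candidate neuron on a batch
    # dense_output:   (n,) output of the full (dense) layer to be approximated
    selected, residual, coef = [], dense_output.copy(), None
    for _ in range(k):
        gains = np.abs(neuron_outputs @ residual)   # correlation with current residual
        gains[selected] = -np.inf                   # never pick a neuron twice
        selected.append(int(np.argmax(gains)))
        X = neuron_outputs[selected].T              # (n, |selected|)
        coef, *_ = np.linalg.lstsq(X, dense_output, rcond=None)  # refit coefficients
        residual = dense_output - X @ coef
    return selected, coef
```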

Membership Inference Attacks and Defenses in Neural Network Pruning [article]

Xiaoyong Yuan, Lan Zhang
2022 arXiv   pre-print
In this paper, we conduct the first analysis of privacy risks in neural network pruning.  ...  Specifically, we investigate the impacts of neural network pruning on training data privacy, i.e., membership inference attacks.  ...  Any internal information about the pruned model  ...  MIA against Neural Network Pruning: Given the workflow of neural network pruning presented in Section 3.1, this section focuses on investigating the privacy  ... 
arXiv:2202.03335v2 fatcat:fmkh6svadbbnfov7bdlx225exy
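
For background on the kind of attack studied here, the sketch below is a minimal confidence-thresholding membership-inference baseline, not the paper's actual attacks, which are more involved; the threshold value and function names are placeholders.

```python
import torch
import torch.nn.functional as F

def confidence_membership_guess(model: torch.nn.Module, samples: torch.Tensor,
                                threshold: float = 0.9) -> torch.Tensor:
    # Guess "training member" when the model's top softmax confidence is high;
    # models that memorize their training data tend to be more confident on it.
    with torch.no_grad():
        confidence = F.softmax(model(samples), dim=1).max(dim=1).values
    return confidence > threshold
```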

i-SpaSP: Structured Neural Pruning via Sparse Signal Recovery [article]

Cameron R. Wolfe, Anastasios Kyrillidis
2022 arXiv   pre-print
to discover high-performing sub-networks and improve upon the pruning efficiency of provable baseline methodologies by several orders of magnitude.  ...  We propose a novel, structured pruning algorithm for neural networks -- the iterative, Sparse Structured Pruning algorithm, dubbed as i-SpaSP.  ...  As such, i-SpaSP is a practical, provable, and efficient algorithm that we hope will enable a better understanding of neural network pruning both in theory and practice.  ... 
arXiv:2112.04905v2 fatcat:soq2ypi5rff2nnefba6a7v6fkm
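
The snippet casts structured pruning as sparse signal recovery; below is a CoSaMP-flavoured sketch of that casting under simple assumptions (select a small support of neurons whose outputs best reconstruct the dense layer's output), not the published i-SpaSP algorithm. The iteration count and names are illustrative.

```python
import numpy as np

def sparse_support_recovery(Phi, y, s, iters=10):
    # Phi: (n, num_neurons) per-neuron outputs on a batch; y: (n,) dense output.
    # Recover an s-sparse support of neurons that best explains y.
    x = np.zeros(Phi.shape[1])
    support = np.array([], dtype=int)
    for _ in range(iters):
        proxy = Phi.T @ (y - Phi @ x)                      # correlation with residual
        top = np.argsort(-np.abs(proxy))[:2 * s]
        candidates = np.union1d(support, top).astype(int)
        sol, *_ = np.linalg.lstsq(Phi[:, candidates], y, rcond=None)
        b = np.zeros_like(x)
        b[candidates] = sol
        support = np.argsort(-np.abs(b))[:s]               # keep the s largest entries
        x = np.zeros_like(x)
        x[support] = b[support]
    return support, x
```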

Lost in Pruning: The Effects of Pruning Neural Networks beyond Test Accuracy [article]

Lucas Liebenwein, Cenk Baykal, Brandon Carter, David Gifford, Daniela Rus
2021 arXiv   pre-print
Neural network pruning is a popular technique used to reduce the inference costs of modern, potentially overparameterized, networks.  ...  Across evaluations on varying architectures and data sets, we find that pruned networks effectively approximate the unpruned model; however, the prune ratio at which pruned networks achieve commensurate  ...  National Science Foundation (NSF) under Award 1723943, Office of Naval Research (ONR) Grant N00014-18-1-2830, and JP Morgan Chase. We thank them for their support.  ... 
arXiv:2103.03014v1 fatcat:433gv2x6lbgrtf5s6myfcbruom

ScaleCert: Scalable Certified Defense against Adversarial Patches with Sparse Superficial Layers [article]

Husheng Han, Kaidi Xu, Xing Hu, Xiaobing Chen, Ling Liang, Zidong Du, Qi Guo, Yanzhi Wang, Yunji Chen
2021 arXiv   pre-print
In this work, we propose a certified defense methodology that achieves high provable robustness for high-resolution images and largely improves the practicality for real adoption of the certified defense  ...  The basic insight of our work is that the adversarial patch intends to leverage localized superficial important neurons (SIN) to manipulate the prediction results.  ...  Acknowledgements: This work is partially supported by the Beijing Natural Science Foundation (JQ18013), the NSF of China (under Grants 61925208, 62002338, U19B2019), Beijing Academy of Artificial Intelligence  ... 
arXiv:2110.14120v2 fatcat:r5qbtibzgze6nmlyd4njkiq72y

Greedy Optimization Provably Wins the Lottery: Logarithmic Number of Winning Tickets is Enough [article]

Mao Ye, Lemeng Wu, Qiang Liu
2020 arXiv   pre-print
However, the theoretical question of how much we can prune a neural network given a specified tolerance of accuracy drop is still open.  ...  The proposed method has the guarantee that the discrepancy between the pruned network and the original network decays at an exponentially fast rate w.r.t. the size of the pruned network, under weak assumptions  ...  Sipping neural networks: Sensitivity-informed provable pruning of neural networks. arXiv preprint arXiv:1910.05422, 2019b. Chin, Ting-Wu, Ding, Ruizhou, Zhang, Cha, and Marculescu, Diana.  ... 
arXiv:2010.15969v1 fatcat:dskczrvzx5e4pjq5ngcjujilua

End-to-End Sensitivity-Based Filter Pruning [article]

Zahra Babaiee and Lucas Liebenwein and Ramin Hasani and Daniela Rus and Radu Grosu
2022 arXiv   pre-print
In this paper, we present a novel sensitivity-based filter pruning algorithm (SbF-Pruner) to learn the importance scores of filters of each layer end-to-end.  ...  Moreover, by training the pruning scores of all layers simultaneously, our method can account for layer interdependencies, which is essential to find a performant sparse sub-network.  ...  Neural network pruning is defined as systematically removing parameters from an existing neural network (Hoefler et al., 2021).  ... 
arXiv:2204.07412v1 fatcat:22jvrrbzezfujg6pao43xryhli
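
The abstract describes learning per-filter importance scores end-to-end; the sketch below shows the general pattern under simple assumptions (a learnable gate per output channel, trained jointly with the weights and thresholded afterwards), not the paper's exact parameterization. The class name, sigmoid gating, and keep_ratio are illustrative.

```python
import torch
import torch.nn as nn

class GatedConv(nn.Module):
    """Conv layer with one learnable score per output filter; the scores are
    trained end-to-end with the weights and thresholded after training."""
    def __init__(self, in_ch: int, out_ch: int, k: int):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
        self.scores = nn.Parameter(torch.zeros(out_ch))

    def forward(self, x):
        gate = torch.sigmoid(self.scores).view(1, -1, 1, 1)   # soft on/off per filter
        return gate * self.conv(x)

    def filters_to_keep(self, keep_ratio: float = 0.5):
        k = max(1, int(keep_ratio * self.scores.numel()))
        return torch.topk(torch.sigmoid(self.scores), k).indices
```

In practice a sparsity penalty on the gates (for instance an L1 term in the training loss) would push scores for redundant filters toward zero before thresholding.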

Efficient Inference via Universal LSH Kernel [article]

Zichang Liu, Benjamin Coleman, Anshumali Shrivastava
2021 arXiv   pre-print
A neural network function is transformed to its weighted kernel density representation, which can be very efficiently estimated with our sketching algorithm.  ...  In this work, we propose mathematically provable Representer Sketch, a concise set of count arrays that can approximate the inference procedure with simple hashing computations and aggregations.  ...  The first line of work focuses on reducing the number of connections by pruning neural networks [14, 15, 16, 17] .  ... 
arXiv:2106.11426v1 fatcat:txiubsptezh4zkf3twdjifehhu

Pruning Neural Networks at Initialization: Why are We Missing the Mark? [article]

Jonathan Frankle, Gintare Karolina Dziugaite, Daniel M. Roy, Michael Carbin
2021 arXiv   pre-print
Recent work has explored the possibility of pruning neural networks at initialization.  ...  As such, the per-weight pruning decisions made by these methods can be replaced by a per-layer choice of the fraction of weights to prune.  ...  In Advances in neural information processing systems, pp. 598-605, 1990. Namhoon Lee, Thalaiyasingam Ajanthan, and Philip Torr. SNIP: Single-shot network pruning based on connection sensitivity.  ... 
arXiv:2009.08576v2 fatcat:ki5se6xa3ranzpc5cbj3o5iy3i
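
The snippet's finding is that per-weight decisions made by pruning-at-initialization methods can be replaced by a per-layer choice of the fraction of weights to prune. A minimal sketch of that reading, assuming a hypothetical layer_fractions mapping from parameter names to prune fractions: draw a random mask within each layer at its target sparsity and apply it at initialization.

```python
import torch

def prune_at_init_per_layer(model: torch.nn.Module, layer_fractions: dict):
    # layer_fractions maps parameter names to the fraction of weights to remove.
    # Within a layer, a random mask at the target sparsity stands in for the
    # per-weight choices that the paper finds can be shuffled without harm.
    masks = {}
    for name, param in model.named_parameters():
        if name in layer_fractions and param.dim() > 1:
            mask = (torch.rand_like(param) >= layer_fractions[name]).float()
            param.data.mul_(mask)
            masks[name] = mask  # reapply after each optimizer step to keep weights pruned
    return masks
```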

Network Pruning That Matters: A Case Study on Retraining Variants [article]

Duong H. Le, Binh-Son Hua
2021 arXiv   pre-print
Network pruning is an effective method to reduce the computational expense of over-parameterized neural networks for deployment on low-resource systems.  ...  Our results emphasize the crucial role of the learning rate schedule in pruned network retraining - a detail often overlooked by practitioners during the implementation of network pruning.  ...  pruned network via PFEC + CLR and Provable Filter Pruning (PFP) on ImageNet.  ... 
arXiv:2105.03193v1 fatcat:2beuippssffstjvhl3gpzdcsbu
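
Since the entry's point is that the retraining schedule matters more than practitioners assume, the sketch below retrains a pruned model with a restarted, annealed learning rate rather than a tiny constant fine-tuning rate. The SGD settings, cosine annealing, and epoch count are placeholder choices, not the paper's exact CLR recipe.

```python
import torch

def retrain_pruned(model, train_loader, epochs=40, base_lr=0.1):
    # Restart from a large learning rate and anneal it, instead of fine-tuning
    # with a small constant rate; the schedule is what the study flags as crucial.
    opt = torch.optim.SGD(model.parameters(), lr=base_lr, momentum=0.9, weight_decay=1e-4)
    sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=epochs)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in train_loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        sched.step()
    return model
```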

Shapley Value as Principled Metric for Structured Network Pruning [article]

Marco Ancona and Cengiz Öztireli and Markus Gross
2020 arXiv   pre-print
Structured pruning is a well-known technique to reduce the storage size and inference cost of neural networks.  ...  In this case, reducing the harm caused by pruning becomes crucial to retain the performance of the network.  ...  Acknowledgments and Disclosure of Funding The authors would like to thank Dr. Tobias Günther for the support and useful discussion. References  ... 
arXiv:2006.01795v1 fatcat:2mlul3rugrhhvkk7llndh6aikq
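
For intuition on using the Shapley value as a pruning metric, here is a generic Monte Carlo estimator of per-filter Shapley values via permutation sampling of marginal contributions. The eval_metric callback is a hypothetical stand-in for "evaluate the network with only these filters active", which is the expensive part the paper approximates more cleverly; this is not the paper's algorithm.

```python
import numpy as np

def shapley_filter_scores(eval_metric, num_filters, num_perms=100):
    # eval_metric(active: frozenset) -> validation metric with only those filters enabled.
    scores = np.zeros(num_filters)
    for _ in range(num_perms):
        perm = np.random.permutation(num_filters)
        active, prev = [], eval_metric(frozenset())
        for f in perm:
            active.append(int(f))
            cur = eval_metric(frozenset(active))
            scores[f] += cur - prev          # marginal contribution of filter f
            prev = cur
    return scores / num_perms                # low scores are pruned first
```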

PHEW: Constructing Sparse Networks that Learn Fast and Generalize Well without Training Data [article]

Shreyas Malakarjun Patil, Constantine Dovrolis
2021 arXiv   pre-print
Our work is based on a recently proposed decomposition of the Neural Tangent Kernel (NTK) that has decoupled the dynamics of the training process into a data-dependent component and an architecture-dependent  ...  That work has shown how to design sparse neural networks for faster convergence, without any training data, using the Synflow-L2 algorithm.  ...  Acknowledgements This work is supported by the National Science Foundation (Award: 2039741) and by the Lifelong Learning Machines (L2M) program of DARPA/MTO (Cooperative Agreement HR0011-18-2-0019).  ... 
arXiv:2010.11354v2 fatcat:4xfvspxlo5bt3nvjyjotzhnep4
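
The snippet references the Synflow-L2 algorithm for designing sparse networks without training data; as one concrete example of the data-free scores this line of work builds on, the sketch below computes a SynFlow-style saliency (all-ones input through the network with absolute weights, scoring each parameter by |dR/dw * w|). The function name and the assumption that the model accepts a single all-ones batch are placeholders, and PHEW's own path-based construction is not shown.

```python
import torch

def synflow_scores(model: torch.nn.Module, input_shape: tuple) -> dict:
    # Data-free saliency: push an all-ones input through the network with
    # |weights| and score every parameter by |dR/dw * w|, R = sum of outputs.
    signs = {n: p.data.sign() for n, p in model.named_parameters()}
    for p in model.parameters():
        p.data.abs_()                              # linearize the network
    model.zero_grad()
    out = model(torch.ones(1, *input_shape))
    out.sum().backward()
    scores = {n: (p.grad * p.data).abs().clone()
              for n, p in model.named_parameters() if p.grad is not None}
    for n, p in model.named_parameters():
        p.data.mul_(signs[n])                      # restore original signs
    model.zero_grad()
    return scores
```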

Deep Learning on a Data Diet: Finding Important Examples Early in Training [article]

Mansheej Paul, Surya Ganguli, Gintare Karolina Dziugaite
2021 arXiv   pre-print
Based on this, we propose data pruning methods which use only local information early in training, and connect them to recent work that prunes data by discarding examples that are rarely forgotten over  ...  The recent success of deep learning has partially been driven by training increasingly overparametrized networks on ever larger datasets.  ...  neural networks, and the role of data.  ... 
arXiv:2107.07075v1 fatcat:a264cmv675btxaryhtpticxke4
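
The entry is about scoring training examples early in training so that unimportant ones can be dropped; a minimal sketch of an error-norm score of that flavour is shown below (distance between the softmax output and the one-hot label, measured after a few epochs of training). The function name and the convention that low-scoring examples are pruned first are assumptions of this illustration rather than the paper's full method.

```python
import torch
import torch.nn.functional as F

def error_norm_scores(model, loader, num_classes):
    # Score each example by ||softmax(logits) - onehot(label)||_2 early in training;
    # examples with small error norms are the easiest and the first candidates to drop.
    model.eval()
    scores = []
    with torch.no_grad():
        for x, y in loader:
            probs = F.softmax(model(x), dim=1)
            onehot = F.one_hot(y, num_classes).float()
            scores.append((probs - onehot).norm(dim=1))
    return torch.cat(scores)
```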
Showing results 1 — 15 out of 730 results