Universal Distribution of Saliencies for Pruning in Layered Neural Networks
1997
International Journal of Neural Systems
We focus on two-layer networks with either a linear or nonlinear output unit, and obtain analytic expressions for the distribution of saliencies and their logarithms. ...
A better understanding of pruning methods based on a ranking of weights according to their saliency in a trained network requires further information on the statistical properties of such saliencies. ...
This research was supported by the Danish Research Councils for the Natural and Technical Sciences through the Danish Computational Neural Network Center (CONNECT) and the Danish National Research Foundation ...
doi:10.1142/s0129065797000471
fatcat:47bxt6gvyjdsresbfg4nejhlby
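The saliency referred to here is the Optimal Brain Damage estimate of the error increase caused by deleting a weight, s_i ≈ (1/2) H_ii w_i^2. A minimal numpy sketch of computing and ranking such saliencies; the synthetic weights and Hessian diagonal stand in for a trained network, and this is an illustration rather than the paper's analytic derivation:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=1000)            # trained weights (synthetic stand-in)
h_diag = rng.gamma(2.0, size=1000)   # diagonal of the error Hessian (synthetic)

# OBD saliency: estimated error increase when weight i is set to zero
saliency = 0.5 * h_diag * w**2
log_saliency = np.log(saliency)      # the paper also studies this distribution

# rank weights: the lowest-saliency weights are pruned first
prune_order = np.argsort(saliency)
```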
Optimizing Artificial Neural Networks using Cat Swarm Optimization Algorithm
2012
International Journal of Intelligent Systems and Applications
Experiments performed on benchmark datasets taken from the UCI machine learning repository show that the proposed CSONN-OBD is an effective tool for training neural networks. ...
An Artificial Neural Network (ANN) is an abstract representation of the biological nervous system which has the ability to solve many complex problems. ...
This work was supported by the University of the Philippines Visayas In-House Research Program under grant no. SP10-06. ...
doi:10.5815/ijisa.2013.01.07
fatcat:ywzdhunqijeb7lf6d56dnlbehm
Adaptive Dynamic Pruning for Non-IID Federated Learning
[article]
2021
arXiv
pre-print
However, the limited computing power and energy constraints of edge devices hinder the adoption of FL for both model training and deployment, especially for the resource-hungry Deep Neural Networks (DNNs ...
In this paper, we present an adaptive pruning scheme for edge devices in an FL system, which applies dataset-aware dynamic pruning for inference acceleration on Non-IID datasets. ...
The key idea of network pruning is to permanently remove deep neural networks' (DNNs) redundant weights by evaluating the saliency of neurons for the input data (Gao et al., 2019) . ...
arXiv:2106.06921v1
fatcat:4igg3t2h4ff45fjs5m4hmc5jei
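The key idea described in the last excerpt can be made concrete with one-shot pruning to a target sparsity, where a binary mask permanently zeroes the lowest-saliency weights. A minimal sketch using |w| as the saliency; the paper's scheme is dataset-aware and dynamic, which this does not capture:

```python
import numpy as np

def prune_to_sparsity(w, sparsity):
    """Zero out the fraction `sparsity` of weights with the smallest saliency."""
    saliency = np.abs(w)                        # simple magnitude criterion
    k = int(sparsity * w.size)
    threshold = np.partition(saliency.ravel(), k)[k]
    mask = (saliency >= threshold).astype(w.dtype)
    return w * mask, mask                       # reuse mask to keep pruned weights at zero

w = np.random.default_rng(1).normal(size=(128, 128))
w_pruned, mask = prune_to_sparsity(w, sparsity=0.9)
```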
Network Compression for Machine-Learnt Fluid Simulations
[article]
2021
arXiv
pre-print
In this study, we explore the applicability of pruning and quantization (FP32 to int8) methods for one such application relevant to modeling fluid turbulence. ...
For full physics emulators, the cost of network inference is often trivial. However, in the current paradigm of data-driven fluid mechanics, models are built as surrogates for complex sub-processes. ...
Visualizing the distribution of network weights in a low-dimensional space for the same layer, using t-SNE [Van der Maaten & Hinton (2008)], shows a similar trend of overlapping regions of similarity, ...
arXiv:2103.00754v1
fatcat:blvmlfd6azgfbdzwi5jepmrloq
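For reference, FP32-to-int8 post-training quantization typically maps floats to integers via a scale factor. A minimal symmetric per-tensor sketch, given here as an assumption rather than the study's exact pipeline:

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor quantization of an FP32 array to int8."""
    scale = np.abs(x).max() / 127.0             # map the largest magnitude to 127
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(2).normal(size=(64, 64)).astype(np.float32)
q, scale = quantize_int8(w)
error = np.abs(w - dequantize(q, scale)).max()  # worst-case error is about scale/2
```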
Pruning of recurrent neural models: an optimal brain damage approach
2018
Nonlinear Dynamics
This paper considers the problem of pruning recurrent neural models of perceptron type with one hidden layer, which may be used for modelling dynamic systems. ...
In order to reduce the number of model parameters (i.e. the number of weights), the Optimal Brain Damage (OBD) pruning algorithm is adopted for the recurrent neural models. ...
The above formulae are universal for the considered recurrent neural model. ...
doi:10.1007/s11071-018-4089-1
fatcat:anvoha6tfbga5oq4cmewz7ziee
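Operationally, OBD-style pruning alternates training with deletion of the lowest-saliency weights. A schematic loop under that reading; the `train` and `hessian_diag` helpers are placeholders, and the saliency formula is the standard OBD one rather than anything specific to the recurrent model here:

```python
import numpy as np

def obd_prune(w, mask, train, hessian_diag, n_rounds=5, frac=0.1):
    """Iteratively retrain, then delete the `frac` lowest-saliency surviving weights."""
    for _ in range(n_rounds):
        w = train(w, mask)                      # retrain remaining weights (placeholder)
        s = 0.5 * hessian_diag(w) * w**2        # OBD saliency per weight
        s[mask == 0] = np.inf                   # already-pruned weights stay pruned
        k = int(frac * mask.sum())
        idx = np.argsort(s.ravel())[:k]         # lowest-saliency surviving weights
        mask.ravel()[idx] = 0
        w = w * mask
    return w, mask
```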
Global Biased Pruning Considering Layer Contribution
2020
IEEE Access
Most existing methods for filter pruning only consider the role of the filter itself, ignoring the characteristics of the layer. ...
Convolutional neural networks (CNNs) have made impressive achievements in many areas, but these successes are limited by storage and computing costs. ...
INDEX TERMS: deep learning, network pruning, convolutional neural networks
doi:10.1109/access.2020.3025130
fatcat:jb23kba2nbbglhrvss5rigrgkq
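One way to read "biased global pruning" is to scale each filter's saliency by a per-layer contribution factor before ranking filters across all layers. A sketch under that assumption; the contribution scores and the L1-norm criterion are illustrative, not the paper's measure:

```python
import numpy as np

def global_biased_ranking(filters_per_layer, layer_contribution):
    """Rank all filters globally, with each layer's scores scaled by its contribution."""
    scored = []
    for l, filters in enumerate(filters_per_layer):
        # per-filter L1 norm as a simple saliency proxy
        norms = np.linalg.norm(filters.reshape(filters.shape[0], -1), ord=1, axis=1)
        for f, n in enumerate(norms):
            scored.append((n * layer_contribution[l], l, f))
    return sorted(scored)                       # prune from the front of this list
```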
Pruning Algorithms to Accelerate Convolutional Neural Networks for Edge Applications: A Survey
[article]
2020
arXiv
pre-print
With the general trend of increasing Convolutional Neural Network (CNN) model sizes, model compression and acceleration techniques have become critical for the deployment of these models on edge devices ...
The survey covers the overarching motivation for pruning, different strategies and criteria, their advantages and drawbacks, along with a compilation of major pruning techniques. ...
[53] used the scaling factor of the Batch Normalization layer as the saliency measure. ...
arXiv:2005.04275v1
fatcat:2w4d65rebjbvvpeddx6ygc6hge
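The batch-normalization-based criterion mentioned here (as in network slimming) reduces to ranking channels by the magnitude of the BN scale factor gamma. A minimal sketch:

```python
import numpy as np

def bn_channel_saliency(gamma, keep_ratio=0.5):
    """Keep the channels whose BN scale factor has the largest magnitude."""
    saliency = np.abs(gamma)                    # |gamma| as the channel saliency
    k = int(keep_ratio * gamma.size)
    keep = np.argsort(saliency)[-k:]            # indices of surviving channels
    return np.sort(keep)

gamma = np.random.default_rng(3).normal(size=256)  # one BN layer's scale factors
kept_channels = bn_channel_saliency(gamma, keep_ratio=0.25)
```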
Learning compact ConvNets through filter pruning based on the saliency of a feature map
2021
IET Image Processing
Among the methods discussed in the literature, filter pruning is a crucial method for constructing lightweight networks. ...
As the performance of convolutional neural networks (CNNs) has improved, their storage and power consumption have grown correspondingly. ...
Direction Team in Zhongyuan University of Technology. ...
doi:10.1049/ipr2.12338
fatcat:75u375ziqfgtbjjw4ifuoxlgdy
System Identification With General Dynamic Neural Networks And Network Pruning
2008
Zenodo
This paper presents an exact pruning algorithm with adaptive pruning interval for general dynamic neural networks (GDNN). GDNNs are artificial neural networks with internal dynamics. ...
During parameter optimization with the Levenberg-Marquardt (LM) algorithm, irrelevant weights of the dynamic neural network are deleted in order to find a model for the plant as simple as possible. ...
In this paper it is shown that network pruning not only works for static neural networks but can also be applied to dynamic neural networks. ...
doi:10.5281/zenodo.1080759
fatcat:36tawnkq45dqxddwvduxjkpwqy
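The combination of optimization and an adaptive pruning interval can be sketched as a training loop that periodically deletes near-irrelevant weights and then lengthens the interval. Plain gradient steps below stand in for the Levenberg-Marquardt updates used in the paper, and both the interval rule and the relevance test are illustrative:

```python
import numpy as np

def train_with_adaptive_pruning(w, grad, steps=1000, interval=50, threshold=1e-3):
    """Interleave optimization steps with pruning of near-irrelevant weights."""
    mask, next_prune = np.ones_like(w), interval
    for t in range(steps):
        w -= 0.01 * grad(w) * mask              # placeholder for an LM update
        if t == next_prune:
            newly_dead = np.abs(w) < threshold  # "irrelevant" weights (illustrative test)
            mask[newly_dead], w[newly_dead] = 0.0, 0.0
            interval = int(interval * 1.5)      # prune less often as training settles
            next_prune += interval
    return w, mask
```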
Overcoming Long-term Catastrophic Forgetting through Adversarial Neural Pruning and Synaptic Consolidation
[article]
2021
IEEE Transactions on Neural Networks and Learning Systems
accepted
Artificial neural networks face the well-known problem of catastrophic forgetting. ...
Inspired by the memory consolidation mechanism in mammalian brains with synaptic plasticity, we propose a confrontation mechanism in which Adversarial Neural Pruning and synaptic Consolidation (ANPyC) ...
Enabling a neural network to sequentially learn multiple tasks is of great significance for expanding the applicability of neural networks in realistic human application scenarios. ...
doi:10.1109/tnnls.2021.3056201
pmid:33577459
arXiv:1912.09091v2
fatcat:glic2itroraa7jpicjaamjljsu
Differentiable Network Pruning for Microcontrollers
[article]
2021
arXiv
pre-print
In this work, we present a differentiable structured network pruning method for convolutional neural networks, which integrates a model's MCU-specific resource usage and parameter importance feedback to ...
Orders of magnitude less storage, memory and computational capacity, compared to what is typically required to execute neural networks, impose strict structural constraints on the network architecture ...
SNIP: Single-shot network pruning based on connection sensitivity ...
What is the state of neural network pruning? ...
arXiv:2110.08350v2
fatcat:7jmlafbgzjexbbtdyslmlx6ig4
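Differentiable structured pruning is commonly implemented by attaching trainable gates to channels and adding a resource term to the loss. A minimal PyTorch sketch of that pattern; the task loss and the resource proxy are placeholders, not the paper's MCU-specific usage model:

```python
import torch

gates = torch.nn.Parameter(torch.zeros(64))   # one trainable logit per channel
x = torch.randn(8, 64)                        # a layer's activations (dummy data)

probs = torch.sigmoid(gates)                  # soft keep-probability per channel
gated = x * probs                             # channels fade out as probs -> 0
task_loss = gated.pow(2).mean()               # stand-in for the task loss
resource_loss = probs.sum()                   # crude proxy for memory/latency usage
loss = task_loss + 1e-2 * resource_loss       # trade accuracy against resources
loss.backward()                               # gates get gradients from both terms
```

At deployment, channels whose gate probability falls below a cutoff would be removed outright, yielding a structurally smaller network.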
Methodological Challenges in Neural Spatial Interaction Modelling: The Issue of Model Selection
[chapter]
2000
Advances in Spatial Science
Rigorous mathematical proofs for the universality of such feedforward neural network models (see, among others, Hornik, Stinchcombe and White 1989) establish the neural spatial interaction models as ...
First, a summarized description of single hidden layer neural spatial interaction models is given in the next section. ...
doi:10.1007/978-3-642-59787-9_6
fatcat:4aajvft76ff33algaho7iawo6m
Compression of Neural Machine Translation Models via Pruning
2016
Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning
... for the different classes of weights in the NMT architecture. ...
We demonstrate the efficacy of weight pruning as a compression technique for a state-of-the-art NMT system. ...
Lastly, we acknowledge NVIDIA Corporation for the donation of Tesla K40 GPUs. ...
doi:10.18653/v1/k16-1029
dblp:conf/conll/SeeLM16
fatcat:iaexhxacdravvisc34lzqsygve
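The paper's magnitude-based schemes differ mainly in how the pruning threshold is applied across weight classes. A minimal sketch of class-blind (one global threshold) versus class-uniform (per-class threshold) pruning; the class names and quantile-based thresholding are illustrative:

```python
import numpy as np

def class_blind(weights, sparsity):
    """One global magnitude threshold across all weight classes."""
    allw = np.concatenate([np.abs(w).ravel() for w in weights.values()])
    t = np.quantile(allw, sparsity)
    return {name: w * (np.abs(w) >= t) for name, w in weights.items()}

def class_uniform(weights, sparsity):
    """A separate threshold per class, so each class loses the same fraction."""
    return {name: w * (np.abs(w) >= np.quantile(np.abs(w), sparsity))
            for name, w in weights.items()}

weights = {"embedding": np.random.randn(100, 32), "attention": np.random.randn(32, 32)}
pruned = class_blind(weights, sparsity=0.8)
```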
FreezeNet: Full Performance by Reduced Storage Costs
[article]
2020
arXiv
pre-print
Pruning generates sparse networks by setting parameters to zero. ...
In our experiments we show that FreezeNets achieve good results, especially for extreme freezing rates. ...
To simplify the backpropagation formulas, we will deal with a feed-forward, fully connected neural network. Similar equations hold for convolutional layers [21]. ...
arXiv:2011.14087v1
fatcat:mazpvgnxxnaw7psj2gev7ovhde
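Where pruning sets parameters to zero, freezing keeps them fixed at their initial values. A minimal sketch of training with a gradient mask, assuming a toy quadratic loss; the mask-selection rule is a placeholder:

```python
import numpy as np

rng = np.random.default_rng(4)
w_init = rng.normal(size=(64, 64))
train_mask = rng.random(w_init.shape) < 0.1   # e.g. train only 10% of the weights

w = w_init.copy()
for _ in range(100):
    g = 2 * w                                 # gradient of a dummy quadratic loss
    w -= 0.01 * g * train_mask                # frozen weights receive no update

assert np.allclose(w[~train_mask], w_init[~train_mask])  # frozen = kept at init
```

Because frozen weights equal their (pseudo-random) initial values, only the mask and the trained subset need to be stored, which is the storage saving the abstract alludes to.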
Compression of Neural Machine Translation Models via Pruning
[article]
2016
arXiv
pre-print
... for the different classes of weights in the NMT architecture. ...
We demonstrate the efficacy of weight pruning as a compression technique for a state-of-the-art NMT system. ...
Lastly, we acknowledge NVIDIA Corporation for the donation of Tesla K40 GPUs. ...
arXiv:1606.09274v1
fatcat:urda6y32wbbcjoii4ugpftvbwa