Smooth Group L1/2 Regularization for Pruning Convolutional Neural Networks
2022
Symmetry
In this paper, a novel smooth group L1/2 (SGL1/2) regularization method is proposed for pruning hidden nodes of the fully connected layer in convolutional neural networks. ...
The main contribution of SGL1/2 is to drive the weights toward 0 at the group level, so that a hidden node can be pruned when all of its corresponding weights are close to 0. ...
It was shown that combining the L1/2 regularization with the group lasso (GL1/2) for feedforward neural networks can prune not only hidden nodes but also the redundant weights of the surviving hidden ...
doi:10.3390/sym14010154
fatcat:d5d7odm5jza4ddtrrqgavk4i2m
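The record above describes a group-level L1/2 penalty on the fully connected layer, smoothed so that it stays differentiable at zero. A minimal sketch of that idea follows; the particular smoothing (a small eps inside the root) and the name smooth_group_l12_penalty are generic illustrative choices under assumption, not the paper's exact SGL1/2 definition.

```python
import torch

def smooth_group_l12_penalty(weight: torch.Tensor, eps: float = 1e-4) -> torch.Tensor:
    """Illustrative smoothed group L1/2 penalty (generic form, not the paper's exact one).

    `weight` has shape (n_hidden, n_inputs); each row is the weight group of one
    hidden node of the fully connected layer. The ideal penalty is
    sum_g ||w_g||_2^(1/2); the eps term keeps it differentiable when w_g = 0.
    """
    group_sq_norms = (weight ** 2).sum(dim=1)           # ||w_g||_2^2 per hidden node
    return ((group_sq_norms + eps ** 2) ** 0.25).sum()  # smooth surrogate of sum_g ||w_g||_2^(1/2)
```

In training, such a penalty would simply be added to the task loss, e.g. loss = criterion(out, target) + lam * smooth_group_l12_penalty(fc.weight); hidden nodes whose group norm is near zero after training are the pruning candidates, consistent with the abstract's description.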
Regression and Multiclass Classification Using Sparse Extreme Learning Machine via Smoothing Group L1/2 Regularizer
2020
IEEE Access
[1], [2] proposed the extreme learning machine (ELM), a learning algorithm for single hidden layer feedforward neural networks (SLFNs). ...
L1/2 regularizer, which is more than the average number of hidden nodes pruned by the group L1/2 and L1 regularization methods. ...
doi:10.1109/access.2020.3031647
fatcat:csmfl7fgc5e6ziwh4jhsu6bezq
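The excerpt above refers to the extreme learning machine (ELM): a single hidden layer feedforward network whose input-to-hidden weights are drawn at random and frozen, so that only the hidden-to-output weights are fitted, usually in closed form via a pseudoinverse. The NumPy sketch below shows only that basic training step; it omits the paper's smoothing group L1/2 sparsity term, whose exact form is not given in this excerpt, and the names and shapes are illustrative.

```python
import numpy as np

def train_elm(X: np.ndarray, Y: np.ndarray, n_hidden: int = 100, seed: int = 0):
    """Fit a basic ELM: random frozen hidden layer, least-squares output layer.

    X: (n_samples, n_features) inputs; Y: (n_samples, n_outputs) targets.
    Returns the random hidden-layer parameters and the fitted output weights.
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input-to-hidden weights (frozen)
    b = rng.standard_normal(n_hidden)                # random hidden biases (frozen)
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y                     # output weights via Moore-Penrose pseudoinverse
    return W, b, beta

def predict_elm(X: np.ndarray, W: np.ndarray, b: np.ndarray, beta: np.ndarray) -> np.ndarray:
    return np.tanh(X @ W + b) @ beta
```

For the sparse variants discussed in the record, beta would instead be obtained by minimising a penalised least-squares objective rather than by the plain pseudoinverse, so that whole hidden nodes can be driven to zero and pruned.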
Optimization of logical networks for the modeling of cancer signaling pathways
2019
Figshare
Acknowledgments The authors would like to acknowledge Dr Thomas Pfau for technical help with the computations and Dr Jun Pang for valuable comments on the manuscript. ...
Acknowledgments We thank TUD, CRTD, FACS and imaging facilities for support, advice, and technical assistance. ...
X-axis (left to right): increasing the L1/2 regularization. Y-axis (top to bottom): increasing the L1 grouped regularization. ...
doi:10.6084/m9.figshare.8191262
fatcat:pnk3svzclbgqxgbcnjd5fokzj4
Norm-based generalisation bounds for multi-class convolutional neural networks
[article]
2021
arXiv
pre-print
We show generalisation error bounds for deep learning with two main improvements over the state of the art. (1) Our bounds have no explicit dependence on the number of classes except for logarithmic factors ...
The presented bounds scale as the norms of the parameter matrices, rather than the number of parameters. ...
For each $l_1, l_2$ with $l_2 > l_1$ and each $A_{l_1,l_2} = (A_{l_1+1}, \ldots, A_{l_2}) \in B_{l_1,l_2} := B_{l_1+1} \times B_{l_1+2} \times \ldots$ ...
arXiv:1905.12430v5
fatcat:4ygnmrbrsrbirkaiscwchqloqa
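For context on what "scale as the norms of the parameter matrices" means, norm-based generalisation bounds for an $L$-layer network are usually of roughly the following shape (a schematic of the common form in this literature, not this paper's actual theorem; $n$ is the sample size and $\lVert A_l \rVert$ a norm of layer $l$'s parameter matrix):

$$\text{generalisation gap} \;\lesssim\; \frac{\operatorname{polylog}(\cdot)}{\sqrt{n}} \, \prod_{l=1}^{L} \lVert A_l \rVert .$$

The number of trainable parameters does not appear; only the matrix norms and logarithmic factors do, which is the sense in which such bounds are norm-based.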