A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2022; you can also visit the original URL.
In this paper, a novel smooth group L1/2 (SGL1/2) regularization method is proposed for pruning hidden nodes of the fully connected layer in convolutional neural networks. ... The main contribution of SGL1/2 is to drive the weights toward 0 at the group level. A hidden node can therefore be pruned when all of its corresponding weights are close to 0. ... It was shown that combining L1/2 regularization with the group lasso (GL1/2) for feedforward neural networks can prune not only hidden nodes but also the redundant weights of the surviving hidden ...doi:10.3390/sym14010154 fatcat:d5d7odm5jza4ddtrrqgavk4i2m
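The group-level pruning idea in the snippet above can be sketched as follows. The paper's exact smoothing function for SGL1/2 is not given here, so the smoothed group norm below (adding a small eps before the square roots) is an illustrative assumption, not the authors' formula; the pruning rule (drop a hidden node whose whole incoming-weight group is near zero) follows the snippet.

```python
import numpy as np

def group_l12_penalty(W, eps=1e-8):
    """Group-L1/2-style penalty: sum over hidden nodes of the square root
    of each node's (smoothed) incoming-weight L2 norm.
    eps is an assumed smoothing term to keep the gradient finite at 0."""
    group_norms = np.sqrt(np.sum(W ** 2, axis=0) + eps)  # one norm per hidden node
    return np.sum(np.sqrt(group_norms))

def prunable_nodes(W, tol=1e-3):
    """Indices of hidden nodes whose entire incoming weight group is
    close to zero and can therefore be removed."""
    group_norms = np.linalg.norm(W, axis=0)
    return np.where(group_norms < tol)[0]

# Toy weight matrix: 2 inputs x 3 hidden nodes; node 1's weights are all tiny.
W = np.array([[0.5, 1e-5, -0.3],
              [0.2, -2e-5, 0.8]])
print(prunable_nodes(W))  # → [1]
```

The penalty is applied during training so that whole columns of `W` shrink together; pruning then removes the corresponding hidden nodes rather than individual weights.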
... proposed a learning algorithm for single hidden layer feedforward neural networks (SLFNs), the extreme learning machine (ELM). ... L1/2 regularizer, which is more than the average number of hidden nodes pruned by the group L1/2 and L1 regularization methods. ...doi:10.1109/access.2020.3031647 fatcat:csmfl7fgc5e6ziwh4jhsu6bezq
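A minimal sketch of the ELM scheme named above: hidden-layer weights are drawn at random and fixed, and only the output weights are learned, via a least-squares solve with the pseudoinverse. The hidden size, activation, and toy regression task are illustrative choices, not taken from the cited paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=50):
    """Train an ELM: random fixed hidden layer, least-squares output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights (never trained)
    b = rng.normal(size=n_hidden)                # random biases
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                 # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy 1-D regression: learn y = sin(x) on [-3, 3].
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = elm_fit(X, y)
mse = np.mean((elm_predict(X, W, b, beta) - y) ** 2)
print(mse)
```

Because the hidden layer is random, training reduces to one linear solve, which is what makes ELM fast; the regularizers discussed in the snippet are then used to prune superfluous hidden nodes.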
Acknowledgments The authors would like to acknowledge Dr Thomas Pfau for technical help with the computations and Dr Jun Pang for valuable comments on the manuscript. ... Acknowledgments We thank TUD, CRTD, FACS and imaging facilities for support, advice, and technical assistance. ... X-axis (left to right): increasing the L1/2 regularization. Y-axis (top to bottom): increasing the L1 grouped regularization. ...doi:10.6084/m9.figshare.8191262 fatcat:pnk3svzclbgqxgbcnjd5fokzj4
We show generalisation error bounds for deep learning with two main improvements over the state of the art. (1) Our bounds have no explicit dependence on the number of classes except for logarithmic factors ... The presented bounds scale with the norms of the parameter matrices, rather than the number of parameters. ... For each l₁, l₂ with l₂ > l₁ and each A_{l₁,l₂} = (A_{l₁+1}, ..., A_{l₂}) ∈ B_{l₁,l₂} := B_{l₁+1} × B_{l₁+2} × ... ...arXiv:1905.12430v5 fatcat:4ygnmrbrsrbirkaiscwchqloqa