DessiLBI: Exploring Structural Sparsity of Deep Networks via Differential Inclusion Paths
[article] · 2020 · arXiv pre-print
Over-parameterization is now ubiquitous in training neural networks, benefiting both optimization, in seeking global optima, and generalization, in reducing prediction error. However, compressive networks are desired in many real-world applications, and directly training small networks may become trapped in local optima. In this paper, instead of pruning or distilling over-parameterized models into compressive ones, we propose a new approach based on differential inclusions of inverse scale spaces.
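The abstract names differential inclusions of inverse scale spaces but does not spell out an update rule. As a rough illustration only, the Python sketch below shows one step of a split linearized Bregman iteration, the kind of discretization such inverse-scale-space methods build on: dense weights W are coupled to a sparse companion variable Gamma through an augmented loss, and Gamma is updated via a Bregman (sub-gradient accumulator) step followed by soft-thresholding. All names and hyperparameters (alpha, kappa, nu) are illustrative assumptions, not the paper's actual implementation.

    import numpy as np

    def soft_threshold(z, lam=1.0):
        # Proximal map of the l1 norm: shrinks entries toward zero.
        return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

    def split_lbi_step(W, Gamma, Z, grad_loss_W, alpha=0.1, kappa=1.0, nu=10.0):
        # One split linearized Bregman iteration step (illustrative sketch).
        #   W     : dense network weights, trained as usual
        #   Gamma : sparse companion variable tracing the regularization path
        #   Z     : sub-gradient accumulator driving Gamma
        # Augmented loss couples W to Gamma: L(W) + ||W - Gamma||^2 / (2 * nu)
        grad_W = grad_loss_W(W) + (W - Gamma) / nu
        grad_Gamma = (Gamma - W) / nu

        W_next = W - kappa * alpha * grad_W          # gradient step on weights
        Z_next = Z - alpha * grad_Gamma              # Bregman update on accumulator
        Gamma_next = kappa * soft_threshold(Z_next)  # sparse projection
        return W_next, Gamma_next, Z_next

Iterating a step of this form lets W train densely while Gamma stays structurally sparse, so important weights emerge along the path rather than being recovered by post-hoc pruning.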
arXiv:2007.02010v1