Where to Prune: Using LSTM to Guide End-to-end Pruning
2018
Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence
Recent years have witnessed the great success of convolutional neural networks (CNNs) in many related fields. However, their huge model size and computational complexity make it difficult to deploy CNNs in some scenarios, such as embedded systems with low computational power. To address this issue, many works have proposed pruning filters in CNNs to reduce computation. However, these works mainly focus on identifying which filters are unimportant within a layer, and then prune filters layer by layer or
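The layer-wise filter pruning that the abstract contrasts against is commonly implemented by scoring each filter with a simple magnitude criterion (e.g. its L1 norm) and removing the lowest-scoring filters. A minimal NumPy sketch of that baseline idea (the L1 criterion, shapes, and pruning ratio here are illustrative assumptions, not the paper's LSTM-guided method):

```python
import numpy as np

def rank_filters_by_l1(weights):
    """Rank conv filters by L1 norm; smaller norm = stronger pruning candidate.

    weights: array of shape (num_filters, in_channels, kH, kW).
    Returns filter indices sorted by ascending L1 norm.
    """
    norms = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    return np.argsort(norms)

# Toy example: one conv layer with 8 filters, prune half of them.
rng = np.random.default_rng(0)
w = rng.standard_normal((8, 3, 3, 3))
order = rank_filters_by_l1(w)
keep = np.sort(order[4:])        # keep the 4 largest-norm filters
pruned = w[keep]
print(pruned.shape)              # (4, 3, 3, 3)
```

In a real network, removing a filter also removes the corresponding input channel of the next layer; the contribution of this paper is to let an LSTM decide where (in which layers) to prune, rather than applying a fixed per-layer criterion like the one above.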
doi:10.24963/ijcai.2018/445
dblp:conf/ijcai/ZhongDGHW18
fatcat:n2pxlqmi5ve47a6w6tbu54sup4