Structured Multi-Hashing for Model Compression

Elad Eban, Yair Movshovitz-Attias, Hao Wu, Mark Sandler, Andrew Poon, Yerlan Idelbayev, Miguel A. Carreira-Perpinan
2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Despite the success of deep neural networks (DNNs), state-of-the-art models are too large to deploy on low-resource devices or common server configurations in which multiple models are held in memory. Model compression methods address this limitation by reducing the memory footprint, latency, or energy consumption of a model with minimal impact on accuracy. We focus on the task of reducing the number of learnable variables in the model. In this work we combine ideas from weight hashing and dimensionality reduction, resulting in a simple and powerful structured multi-hashing method based on matrix products that allows direct control of the model size of any deep network and is trained end-to-end. We demonstrate the strength of our approach by compressing models from the ResNet, EfficientNet, and MobileNet architecture families. Our method allows us to drastically decrease the number of variables while maintaining high accuracy. For instance, by applying our approach to EfficientNet-B4 (16M parameters) we reduce it to the size of B0 (5M parameters), while gaining over 3% in accuracy over the B0 baseline. On the commonly used benchmark CIFAR10 we reduce the ResNet32 model by 75% with no loss in quality, and are able to achieve 10x compression while still attaining above 90% accuracy.

* Elad and Yair contributed equally to this paper. They jointly proposed the idea of structured multi-hashing. Yair was the main contributor to the manuscript. Elad wrote most of the code and ran the EfficientNet experiments. Hao contributed to coding and experiments. Yerlan ran the CIFAR and ResNet experiments and simplified some aspects of the structured hashing. Miguel advised Yerlan on issues of optimization and deep net compression. Mark and Andrew helped with the MobileNet and ResNet experiments. † Work performed while at Google Research.
doi:10.1109/cvpr42600.2020.01192 dblp:conf/cvpr/EbanMWSPIC20
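As an illustration of the matrix-product idea described in the abstract, the following is a minimal PyTorch sketch, not the authors' exact construction (which combines hashing with the matrix product): all layer weights are read from a virtual square matrix that is itself the product of two small trainable factors, so the trainable parameter count is set directly by the inner rank. The class and argument names (MultiHashedLinearNet, layer_shapes, rank) are hypothetical, and plain slicing stands in for the paper's structured hash.

# Hypothetical sketch (not the paper's exact method): generate all network
# weights from one small trainable matrix product, so the number of learnable
# variables is controlled directly by the inner rank `rank`.
import math
import torch
import torch.nn as nn

class MultiHashedLinearNet(nn.Module):
    def __init__(self, layer_shapes, rank):
        super().__init__()
        # Total number of virtual weights across all layers.
        n = sum(math.prod(s) for s in layer_shapes)
        m = math.ceil(math.sqrt(n))  # side of the virtual square matrix
        # Trainable parameters: two thin factors, 2 * m * rank values in
        # total, instead of n independent weights.
        self.u = nn.Parameter(torch.randn(m, rank) / math.sqrt(rank))
        self.v = nn.Parameter(torch.randn(rank, m) / math.sqrt(m))
        self.layer_shapes = layer_shapes

    def materialize(self):
        # The virtual weight matrix is the product of the trainable factors;
        # each layer reads its weights from a slice of the flattened product
        # (the paper uses a structured hash here instead of plain slicing).
        flat = (self.u @ self.v).reshape(-1)
        weights, offset = [], 0
        for shape in self.layer_shapes:
            size = math.prod(shape)
            weights.append(flat[offset:offset + size].reshape(shape))
            offset += size
        return weights

    def forward(self, x):
        # Weights are rebuilt each forward pass, so gradients flow through
        # the two factors and the model trains end-to-end.
        ws = self.materialize()
        for i, w in enumerate(ws):  # each w has shape (out, in)
            x = x @ w.t()
            if i < len(ws) - 1:
                x = torch.relu(x)
        return x

For example, MultiHashedLinearNet([(128, 784), (10, 128)], rank=8) covers about 102K virtual weights with roughly 5.1K trainable values (a ~20x reduction), and shrinking or growing rank moves the model size smoothly, which is the "direct control of model size" property the abstract highlights.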