File type: application/pdf
Binary Quantization Analysis of Neural Networks Weights on MNIST Dataset
2021
Elektronika ir Elektrotechnika
This paper considers the design of a binary scalar quantizer for a Laplacian source and its application in compressed neural networks. The quantizer's performance is investigated over a wide dynamic range of data variances, and for that purpose we derive novel closed-form expressions. Moreover, we propose two selection criteria for the variance range of interest. Binary quantizers are further implemented for compressing neural network weights, and their performance is analysed for a simple …
doi:10.5755/j02.eie.28881
fatcat:bl77womljnh3ph6u3v6ihp2szm
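
As a rough illustration of the idea summarised in the abstract, the sketch below applies a 1-bit (binary) scalar quantizer to a weight tensor assumed to follow a zero-mean Laplacian distribution: with the threshold at zero, the MSE-optimal representation levels are the conditional means ±E[|W|] = ±σ/√2. The function names, the NumPy-based implementation, and the synthetic weight matrix are illustrative assumptions, not the paper's actual quantizer design or experimental setup.

```python
import numpy as np

def binary_quantize_laplacian(weights):
    """Hypothetical sketch: 1-bit scalar quantization of a weight tensor.

    Assumes zero-mean Laplacian-distributed weights; the binary quantizer
    with threshold 0 uses the conditional means +/- sigma/sqrt(2) as its
    two representation levels.
    """
    sigma = np.std(weights)                 # estimate the source standard deviation
    level = sigma / np.sqrt(2.0)            # representation level for a Laplacian source
    codes = (weights >= 0).astype(np.uint8) # 1 bit per weight
    return codes, level

def dequantize(codes, level):
    """Reconstruct weights from the 1-bit codes and the scalar level."""
    return np.where(codes == 1, level, -level)

# Usage: quantize a synthetic unit-variance Laplacian "weight matrix"
rng = np.random.default_rng(0)
w = rng.laplace(loc=0.0, scale=1.0 / np.sqrt(2.0), size=(256, 128))
codes, level = binary_quantize_laplacian(w)
w_hat = dequantize(codes, level)
sqnr_db = 10 * np.log10(np.mean(w**2) / np.mean((w - w_hat)**2))
print(f"representation level: {level:.4f}, SQNR: {sqnr_db:.2f} dB")
```

For a unit-variance Laplacian source this construction gives a signal-to-quantization-noise ratio of about 3 dB, while storing each weight in a single bit plus one scalar level per tensor; the paper's closed-form analysis across variances is not reproduced here.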