A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020; you can also visit the original URL.
The file type is application/pdf.
On the Universal Approximability and Complexity Bounds of Quantized ReLU Neural Networks
[article], 2019, arXiv pre-print
Compression is a key step in deploying large neural networks on resource-constrained platforms. As a popular compression technique, quantization constrains the number of distinct weight values and thus reduces the number of bits required to represent and store each weight. In this paper, we study the representation power of quantized neural networks. First, we prove the universal approximability of quantized ReLU networks on a wide class of functions. Then we provide upper bounds on the number of …
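The storage saving the abstract describes comes from restricting weights to a small codebook: with only `num_levels` distinct values, each weight needs just ⌈log₂(num_levels)⌉ bits instead of a full 32-bit float. A minimal illustrative sketch (not the paper's construction; the uniform codebook and helper name are assumptions for illustration):

```python
import math
import random

def quantize_uniform(weights, num_levels):
    """Map each weight to the nearest of `num_levels` evenly spaced values
    between the min and max weight (a simple uniform codebook, assumed here
    purely for illustration)."""
    lo, hi = min(weights), max(weights)
    step = (hi - lo) / (num_levels - 1)
    codebook = [lo + i * step for i in range(num_levels)]
    quantized = [min(codebook, key=lambda c: abs(c - w)) for w in weights]
    # Bits needed to index one of num_levels codebook entries per weight.
    bits_per_weight = math.ceil(math.log2(num_levels))
    return quantized, bits_per_weight

random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(8)]
q, bits = quantize_uniform(weights, num_levels=4)
print(bits)  # 2 bits per weight instead of 32 for float32
```

With 4 levels, each weight is stored as a 2-bit index into the shared codebook, a 16x reduction versus float32; the paper's bounds concern how much such a restriction costs in approximation error.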
arXiv:1802.03646v4
fatcat:y4qjs2hirfhfnbvemctnhhwspm