Aggregated Learning: A Deep Learning Framework Based on Information-Bottleneck Vector Quantization
[article] 2019, arXiv pre-print
Based on the notion of the information bottleneck (IB), we formulate a quantization problem called "IB quantization". We show that IB quantization is equivalent to learning based on the IB principle. Under this equivalence, standard neural network models can be viewed as scalar (single-sample) IB quantizers. It is known from conventional rate-distortion theory that scalar quantizers are inferior to vector (multi-sample) quantizers. Such a deficiency then inspires us to develop a novel framework, Aggregated Learning, based on information-bottleneck vector quantization.
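For context, the IB principle the abstract builds on seeks a compressed representation T of an input X that remains maximally informative about a target Y. In its standard Lagrangian form (conventional notation, not shown on this page), the objective is

\min_{p(t \mid x)} \; I(X; T) - \beta \, I(T; Y)

where I(\cdot\,;\cdot) denotes mutual information and \beta > 0 trades off compression of X against preservation of information about Y. The paper's equivalence result casts this learning objective as a quantization problem, so that rate-distortion arguments about scalar versus vector quantizers carry over to neural network models.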
arXiv:1807.10251v3
fatcat:7opuatzfknfh5ld2qat5y4qzuq