Aggregated Learning: A Deep Learning Framework Based on Information-Bottleneck Vector Quantization [article]

Hongyu Guo, Yongyi Mao, Ali Al-Bashabsheh, Richong Zhang
2019 arXiv pre-print
Based on the notion of the information bottleneck (IB), we formulate a quantization problem called "IB quantization". We show that IB quantization is equivalent to learning based on the IB principle. Under this equivalence, standard neural network models can be viewed as scalar (single-sample) IB quantizers. It is known from conventional rate-distortion theory that scalar quantizers are inferior to vector (multi-sample) quantizers. This deficiency inspires us to develop a novel framework, AgrLearn, that corresponds to vector IB quantizers for learning with neural networks. Unlike standard networks, AgrLearn simultaneously optimizes against multiple data samples. We experimentally verify that AgrLearn yields significant improvements when applied to several current deep learning architectures for image recognition and text classification. We also show empirically that AgrLearn can reduce the number of training samples needed for ResNet training by up to 80%.
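The abstract's core idea, fusing several samples into one network input and fitting their labels jointly, can be illustrated with a minimal PyTorch-style sketch. Everything below is an illustrative assumption, not the authors' reference implementation: the class name AgrLearnSketch, the choice to concatenate the n samples along the channel axis, the toy backbone, and the averaged per-slot cross-entropy loss are all placeholders for whatever fusion and architecture the paper actually uses.

```python
# Minimal sketch of the "aggregated learning" idea from the abstract:
# fuse n samples into one input, predict all n labels simultaneously.
# The fusion scheme and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class AgrLearnSketch(nn.Module):
    def __init__(self, n=4, in_channels=3, num_classes=10):
        super().__init__()
        self.n = n
        self.num_classes = num_classes
        # One plausible fusion: concatenate the n samples along the channel
        # axis, so the first layer sees n * in_channels channels.
        self.backbone = nn.Sequential(
            nn.Conv2d(n * in_channels, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # n classification heads in one linear layer: n label predictions.
        self.head = nn.Linear(64, n * num_classes)

    def forward(self, x):
        # x: (batch, n * in_channels, H, W) -> (batch, n, num_classes)
        logits = self.head(self.backbone(x))
        return logits.view(-1, self.n, self.num_classes)

# Usage: group every n samples of an ordinary batch into one aggregated
# input, then average the cross-entropy loss over the n prediction slots.
n, batch = 4, 8
images = torch.randn(batch * n, 3, 32, 32)
labels = torch.randint(0, 10, (batch * n,))
x = images.view(batch, n * 3, 32, 32)   # fuse n samples per input
y = labels.view(batch, n)               # n targets per input
model = AgrLearnSketch(n=n)
logits = model(x)                       # (batch, n, 10)
loss = nn.functional.cross_entropy(logits.reshape(-1, 10), y.reshape(-1))
loss.backward()
```

The sketch makes the contrast with standard training concrete: a scalar (single-sample) quantizer corresponds to n = 1, while n > 1 lets the network jointly encode multiple samples, which is the vector-quantization analogy the paper draws.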
arXiv:1807.10251v3