A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2022; you can also visit the original URL.
The file type is application/pdf.
Pareto-Optimal Clustering with the Primal Deterministic Information Bottleneck
2022
Entropy
At the heart of both lossy compression and clustering is a trade-off between the fidelity and size of the learned representation. Our goal is to map out and study the Pareto frontier that quantifies this trade-off. We focus on the optimization of the Deterministic Information Bottleneck (DIB) objective over the space of hard clusterings. To this end, we introduce the primal DIB problem, which we show results in a much richer frontier than its previously studied Lagrangian relaxation when […]
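The trade-off described in the abstract can be made concrete: for a hard clustering f mapping each x to a cluster z, each clustering yields one point (H(Z), I(Z;Y)) in the plane, and the Pareto frontier consists of the clusterings that maximize relevant information I(Z;Y) for a given representation size H(Z). Below is a minimal illustrative sketch (not the paper's algorithm) that computes this point for a given joint distribution p(x, y) and hard clustering; the function and variable names are our own:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability vector (zeros ignored)."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def dib_point(p_xy, f):
    """Map a hard clustering f (list: x -> z) of the joint p(x, y)
    to its (H(Z), I(Z;Y)) coordinates on the DIB trade-off plane."""
    n_z = max(f) + 1
    # Deterministic encoder: p(z, y) = sum over {x : f(x) = z} of p(x, y)
    p_zy = np.zeros((n_z, p_xy.shape[1]))
    for x, z in enumerate(f):
        p_zy[z] += p_xy[x]
    p_z = p_zy.sum(axis=1)
    p_y = p_zy.sum(axis=0)
    # I(Z;Y) = H(Z) + H(Y) - H(Z,Y)
    h_z = entropy(p_z)
    i_zy = h_z + entropy(p_y) - entropy(p_zy)
    return h_z, i_zy

# Toy example: X perfectly predicts Y.
p_xy = np.array([[0.5, 0.0],
                 [0.0, 0.5]])
print(dib_point(p_xy, [0, 1]))  # fine clustering: H(Z)=1 bit, I(Z;Y)=1 bit
print(dib_point(p_xy, [0, 0]))  # merged clustering: H(Z)=0, I(Z;Y)=0
```

Enumerating (or heuristically searching) hard clusterings and keeping the non-dominated (H(Z), I(Z;Y)) points traces out the primal DIB frontier the abstract refers to.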
doi:10.3390/e24060771
pmid:35741492
pmcid:PMC9222302
fatcat:gezn4juidrfg7f5ozwzzt47xp4