JECL: Joint Embedding and Cluster Learning for Image-Text Pairs

Sean T. Yang, Kuan-Hao Huang, Bill Howe
2020, arXiv pre-print
We propose JECL, a method for clustering image-caption pairs by training parallel encoders with regularized clustering and alignment objectives, simultaneously learning both representations and cluster assignments. These image-caption pairs arise frequently in high-value applications where structured training data is expensive to produce but free-text descriptions are common. JECL trains by minimizing the Kullback-Leibler divergence between the image and text distributions and a combined joint target distribution, and by optimizing the Jensen-Shannon divergence between the soft cluster assignments of the images and text. Regularizers are also applied to JECL to prevent trivial solutions. Experiments show that JECL outperforms both single-view and multi-view methods on large benchmark image-caption datasets, and is remarkably robust to missing captions and varying data sizes.
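The sketch below is a rough illustration of the two objectives described in the abstract, not the authors' implementation. It assumes DEC-style Student's t soft assignments to shared cluster centroids; the helper names (soft_assign, target_distribution, jecl_style_loss), the choice of joint target as a sharpened average of the two views, and the weight lam are illustrative assumptions, and the regularizers mentioned above are omitted.

```python
# Minimal sketch of the clustering + alignment objectives (assumptions noted above).
import torch
import torch.nn.functional as F

def soft_assign(z, centroids, alpha=1.0):
    """Student's t soft cluster assignments q (assumption: DEC-style kernel)."""
    dist_sq = torch.cdist(z, centroids) ** 2               # (n, k) squared distances
    q = (1.0 + dist_sq / alpha) ** (-(alpha + 1) / 2)
    return q / q.sum(dim=1, keepdim=True)                  # normalize rows to probabilities

def target_distribution(q):
    """Sharpened auxiliary target p derived from soft assignments q (DEC-style)."""
    w = q ** 2 / q.sum(dim=0)
    return w / w.sum(dim=1, keepdim=True)

def jecl_style_loss(z_img, z_txt, centroids, lam=1.0):
    """KL of each view to a combined joint target, plus JS alignment of the views."""
    q_img = soft_assign(z_img, centroids)
    q_txt = soft_assign(z_txt, centroids)

    # Combined joint target (one plausible choice: sharpen the average of both views).
    p = target_distribution(0.5 * (q_img + q_txt)).detach()

    # KL(p || q_view) for each view against the combined target.
    kl = F.kl_div(q_img.log(), p, reduction="batchmean") + \
         F.kl_div(q_txt.log(), p, reduction="batchmean")

    # Jensen-Shannon divergence between the two views' soft assignments.
    m = 0.5 * (q_img + q_txt)
    js = 0.5 * (F.kl_div(m.log(), q_img, reduction="batchmean") +
                F.kl_div(m.log(), q_txt, reduction="batchmean"))

    return kl + lam * js
```

In this sketch the image and text encoders would produce z_img and z_txt for the same batch of pairs, and the centroids are learned jointly with the encoders by minimizing jecl_style_loss.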
arXiv:1901.01860v3