A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020; you can also visit the original URL.
The file type is application/pdf.
Cross-Modal Self-Taught Hashing for large-scale image retrieval
2016
Signal Processing
Cross-modal hashing integrates the advantages of traditional cross-modal retrieval and hashing; it can solve large-scale cross-modal retrieval effectively and efficiently. However, existing cross-modal hashing methods either rely on labeled training data or lack semantic analysis. In this paper, we propose Cross-Modal Self-Taught Hashing (CMSTH) for large-scale cross-modal and unimodal image retrieval. CMSTH can effectively capture the semantic correlation from unlabeled training data. Its
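As context for the abstract above, the sketch below illustrates the general retrieval step shared by cross-modal hashing methods (not the CMSTH algorithm itself): items from different modalities are mapped to binary codes in a common Hamming space, and a query code from one modality is matched against database codes from the other by Hamming distance. The array names and the random "learned" codes are illustrative assumptions only.

```python
# Minimal sketch of Hamming-space retrieval, assuming binary codes already exist.
import numpy as np

rng = np.random.default_rng(0)

n_database, n_bits = 1000, 64
# Hypothetical binary codes for database images (in practice produced by a
# learned hash function; here random placeholders stand in for them).
image_codes = rng.integers(0, 2, size=(n_database, n_bits), dtype=np.uint8)

# Hypothetical code for a text query, mapped into the same Hamming space.
text_query_code = rng.integers(0, 2, size=n_bits, dtype=np.uint8)

# Hamming distance = number of differing bits: XOR each database code with
# the query code, then count nonzero bits per row.
hamming = np.count_nonzero(image_codes ^ text_query_code, axis=1)

# Rank database images by distance and keep the 10 nearest.
top_k = np.argsort(hamming)[:10]
print("Top-10 image indices:", top_k)
print("Their Hamming distances:", hamming[top_k])
```

Because the codes are short binary strings, this ranking needs only bitwise XOR and popcount operations, which is what makes hashing-based retrieval practical at large scale.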
doi:10.1016/j.sigpro.2015.10.010
fatcat:2lkzoss4jrdhlgodxfi743gnzi