Cross-modal Common Representation Learning by Hybrid Transfer Network
2017
Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence
DNN-based cross-modal retrieval is a research hotspot for retrieving across different modalities, such as image and text, but existing methods often face the challenge of insufficient cross-modal training data. In the single-modal scenario, a similar problem is usually alleviated by transferring knowledge from large-scale auxiliary datasets (such as ImageNet). Knowledge from such single-modal datasets is also very useful for cross-modal retrieval, which can provide rich general semantic information that can be
doi:10.24963/ijcai.2017/263
dblp:conf/ijcai/HuangPY17
fatcat:c2u5yzhbcrb7nlwuwuhbbgsnwi