Lecture Notes in Computer Science
This paper presents a novel method for indexing a large set of images by taking advantage of associated multimodal content such as text or tags. The method finds relationships between the visual and text modalities, enriching the image content representation to improve the performance of content-based image search. It learns a mapping that connects visual and text information, which allows new (annotated and unannotated) images to be projected into the space defined by semantic

doi:10.1007/978-3-642-41822-8_46 fatcat:cnsejphfuzgmtmmpnejkjwo5ya
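The abstract does not specify how the visual-to-text mapping is learned, but the idea of projecting image features into a text-defined semantic space can be sketched with a simple least-squares linear mapping. Everything below (feature dimensions, the use of `np.linalg.lstsq`, the toy data) is an illustrative assumption, not the paper's actual method.

```python
import numpy as np

# Illustrative sketch only: assumes a linear mapping W from visual
# features to a text (tag) feature space, fit by least squares.
rng = np.random.default_rng(0)

# Toy training set: 100 annotated images with 64-dim visual features
# and 16-dim text/tag features (dimensions are arbitrary).
V = rng.standard_normal((100, 64))   # visual features
T = rng.standard_normal((100, 16))   # text features

# Learn W minimizing ||V @ W - T||_F.
W, *_ = np.linalg.lstsq(V, T, rcond=None)

# Project a new, unannotated image into the semantic (text) space,
# enabling text-aware content-based search over unannotated images.
v_new = rng.standard_normal(64)
t_proj = v_new @ W
print(t_proj.shape)  # (16,)
```

In practice, cross-modal indexing methods of this kind often use richer mappings (e.g. canonical correlation analysis or learned embeddings); the linear fit above only conveys the projection idea.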