Content is still king

Ba Quan Truong, Aixin Sun, Sourav S. Bhowmick
2012 Proceedings of the 2nd ACM International Conference on Multimedia Retrieval - ICMR '12  
Tags associated with social images are a valuable information source for superior tag-based image retrieval (TagIR) experiences. One of the key issues in TagIR is to learn the effectiveness of a tag in describing the visual content of its annotated image, also known as tag relevance. One of the most effective approaches in the literature for tag relevance learning is neighbor voting. In this approach, a tag is considered more relevant to its annotated image (also known as the seed image) if the tag is also used to annotate the neighbor images (nearest neighbors by visual similarity). However, the state-of-the-art approach that realizes the neighbor voting scheme does not explore the possibility of exploiting the content (e.g., degree of visual similarity between the seed and neighbor images) and contextual (e.g., tag association by co-occurrence) features of social images to further boost the accuracy of TagIR. In this paper, we identify and explore the viability of four content- and context-based dimensions, namely image similarity, tag matching, tag influence, and refined tag relevance, in the context of tag relevance learning for TagIR. With alternative formulations under each dimension, we empirically evaluate 20 neighbor voting schemes with 81 single-tag queries on the NUS-WIDE dataset. Despite the potential benefits that contextual information related to tags brings to image search, surprisingly, our experimental results reveal that the content-based (image similarity) dimension is still the king, as it significantly improves the accuracy of tag relevance learning for TagIR. On the other hand, tag relevance learning does not benefit from the context-based dimensions in the voting schemes.
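To illustrate the basic neighbor voting idea summarized above, here is a minimal Python sketch, assuming images are represented by visual feature vectors and annotated with tag sets. The names (visual_features, tags, k) and the frequency-based prior are illustrative assumptions for exposition, not the paper's exact formulation or weighting schemes.

```python
# Minimal sketch of neighbor-voting tag relevance (illustrative only).
# Assumptions: visual_features maps image id -> NumPy feature vector,
# tags maps image id -> list of tags; k is the neighborhood size.
from collections import Counter
import numpy as np

def tag_relevance(seed_id, visual_features, tags, k=50):
    """Score each tag of the seed image by counting how many of its k
    visually nearest neighbors also carry that tag, minus the tag's
    expected count from its overall frequency, so that globally common
    tags are not favored."""
    seed_vec = visual_features[seed_id]

    # Rank all other images by Euclidean distance in visual feature space.
    others = [i for i in visual_features if i != seed_id]
    others.sort(key=lambda i: np.linalg.norm(visual_features[i] - seed_vec))
    neighbors = others[:k]

    # Count neighbor votes per tag.
    votes = Counter(t for i in neighbors for t in set(tags[i]))

    # Corpus-wide tag frequency used as a simple prior.
    n_images = len(visual_features)
    corpus_freq = Counter(t for i in tags for t in set(tags[i]))

    return {
        t: votes[t] - k * corpus_freq[t] / n_images  # votes minus prior
        for t in tags[seed_id]
    }
```

The content and context dimensions the paper studies (e.g., weighting each neighbor's vote by visual similarity, or by tag co-occurrence) would replace the plain vote count in such a scheme; the sketch only shows the unweighted baseline.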
doi:10.1145/2324796.2324808 dblp:conf/mir/TruongSB12