Semantic context learning with large-scale weakly-labeled image set

Yao Lu, Wei Zhang, Ke Zhang, Xiangyang Xue
Proceedings of the 21st ACM International Conference on Information and Knowledge Management (CIKM '12), 2012
There are a large number of images available on the web; meanwhile, only a subset of them can be labeled by professionals because manual annotation is time-consuming and labor-intensive. Although collaborative image tagging systems such as Flickr now provide many images tagged by Internet users, these labels may be incorrect or incomplete. Furthermore, semantic richness requires more than one label to describe an image in real applications, and multiple labels usually interact with each other in semantic space. It is therefore important to learn semantic context from a large-scale weakly-labeled image set for the task of multi-label annotation. In this paper, we develop a novel method to learn semantic context and predict the labels of web images in a semi-supervised framework. To address the scalability issue, a small number of exemplar images are first selected to cover the whole data cloud; the label vector of each image is then estimated as a local combination of the exemplar label vectors. Visual context, semantic context, and neighborhood consistency in both the visual and semantic spaces are fully leveraged in the proposed framework. Finally, the semantic context and the label confidence vectors of the exemplar images are learned jointly in an iterative way. Experimental results on a real-world image dataset demonstrate the effectiveness of our method.
doi:10.1145/2396761.2398532 dblp:conf/cikm/LuZZX12 fatcat:vivhicveuvccrdjruj5y3mkrau
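
To make the exemplar-based formulation more concrete, below is a minimal sketch, in Python with NumPy and scikit-learn, of the general anchor/exemplar pattern the abstract describes: select a small set of exemplars to cover the data, express each image's label vector as a local combination of exemplar label vectors, and iteratively refine the exemplar label confidences with a simple label-correlation (semantic context) term. The function names, the use of k-means for exemplar selection, the Gaussian neighborhood weights, and the update rule are illustrative assumptions, not the authors' exact algorithm.

```python
# Hypothetical sketch of exemplar/anchor-based semi-supervised multi-label estimation,
# in the spirit of the abstract; all design choices below are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors


def select_exemplars(X, m, seed=0):
    """Pick m exemplar (anchor) points as k-means centers over visual features."""
    km = KMeans(n_clusters=m, n_init=10, random_state=seed).fit(X)
    return km.cluster_centers_


def local_weights(X, anchors, k=5, sigma=1.0):
    """Sparse local combination weights Z (n x m): each image is expressed
    as a convex combination of its k nearest exemplars."""
    n, m = X.shape[0], anchors.shape[0]
    nn = NearestNeighbors(n_neighbors=k).fit(anchors)
    dist, idx = nn.kneighbors(X)
    w = np.exp(-dist**2 / (2 * sigma**2))
    w /= w.sum(axis=1, keepdims=True) + 1e-12      # normalize rows to sum to one
    Z = np.zeros((n, m))
    for i in range(n):
        Z[i, idx[i]] = w[i]
    return Z


def refine_exemplar_labels(Z, Y_obs, mask, n_iter=20, alpha=0.5):
    """Iteratively estimate exemplar label confidences A (m x c).

    Z     : (n x m) local reconstruction weights
    Y_obs : (n x c) weak/partial 0-1 labels from user tags
    mask  : (n x c) 1 where a tag was observed, 0 otherwise
    alpha : weight of the label-correlation (semantic context) smoothing term
    """
    c = Y_obs.shape[1]
    A = np.linalg.lstsq(Z, Y_obs, rcond=None)[0]     # initial exemplar labels
    for _ in range(n_iter):
        Y_hat = Z @ A                                # propagate to all images
        # semantic context: label co-occurrence estimated from current predictions
        C = np.nan_to_num(np.corrcoef(Y_hat, rowvar=False))
        # keep observed tags, fill missing ones via propagation plus label correlation
        Y_new = mask * Y_obs + (1 - mask) * ((1 - alpha) * Y_hat + alpha * Y_hat @ C / c)
        A = np.linalg.lstsq(Z, Y_new, rcond=None)[0]  # refit exemplar label confidences
    return A, Z @ A
```

In this sketch, X would be an n x d visual feature matrix and Y_obs the n x c binary tag matrix harvested from user tags, with mask indicating which tags were actually observed; only the m exemplar label vectors are optimized, which is what keeps the scheme scalable to large weakly-labeled collections.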