The CLEF 2011 Photo Annotation and Concept-based Retrieval Tasks

Stefanie Nowak, Karolin Nagel, Judith Liebetrau
2011 Conference and Labs of the Evaluation Forum  
The ImageCLEF 2011 Photo Annotation and Concept-based Retrieval Tasks pose the challenge of automatically annotating Flickr images with 99 visual concepts and of retrieving images based on query topics. Participants were provided with a training set of 8,000 images including annotations, EXIF data, and Flickr user tags. The annotation challenge was performed on 10,000 images, while the retrieval challenge considered 200,000 images. Both tasks differentiate among approaches that use solely visual information, approaches that rely only on textual information in the form of image metadata and user tags, and multi-modal approaches that combine both information sources. The relevance assessments were acquired with a crowdsourcing approach, and the evaluation followed two paradigms: per concept and per example. In total, 18 research teams participated in the annotation challenge with 79 submissions. The concept-based retrieval task was tackled by 4 teams that submitted a total of 31 runs. Summarizing the results, the annotation task was solved best with a MiAP of 0.443 in the multi-modal configuration, with a MiAP of 0.388 in the visual configuration, and with a MiAP of 0.346 in the textual configuration. The concept-based retrieval task was solved best with a MAP of 0.164 using multi-modal information and manual intervention in the query formulation. The best completely automated approach achieved a MAP of 0.085 using solely textual information. The results indicate that while the annotation task shows promising results, the concept-based retrieval task is much harder to solve, especially for specific information needs.
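The MAP figures quoted above are standard mean average precision over ranked result lists. A minimal sketch of how such a score is computed (hypothetical helper names and toy data, not the task's official evaluation code):

```python
def average_precision(ranked, relevant):
    """AP for one query: mean of precision@k at each rank k of a relevant hit."""
    hits, score = 0, 0.0
    for rank, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
            score += hits / rank  # precision at this rank
    return score / len(relevant) if relevant else 0.0

def mean_average_precision(runs):
    """MAP: average of per-query AP values over all query topics."""
    aps = [average_precision(ranked, rel) for ranked, rel in runs]
    return sum(aps) / len(aps) if aps else 0.0

# Toy example: two query topics with known relevant sets.
runs = [
    (["a", "b", "c"], {"a", "c"}),  # AP = (1/1 + 2/3) / 2
    (["x", "y"], {"y"}),            # AP = (1/2) / 1
]
print(mean_average_precision(runs))
```

The interpolated variant (MiAP) used for the annotation task additionally interpolates precision values over recall levels, but the per-query averaging structure is the same.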