Introduction to the CLEF2013 Labs and Workshop
Conference and Labs of the Evaluation Forum
The CLEF 2013 conference is a new edition of the popular CLEF campaign and workshop series, which has run since 2000 and has contributed to the systematic evaluation of information access systems, primarily through experimentation on shared tasks. In 2010 CLEF was launched in a new format: a conference with research presentations, panels, poster and demo sessions, and laboratory evaluation workshops. These are proposed and operated by groups of organizers who volunteer their time and effort to define, promote, administrate and run an evaluation activity.

Labs for CLEF 2013, as in the 2010, 2011 and 2012 editions, are of two types: laboratories that conduct evaluations of information access systems, and workshops that discuss and pilot innovative evaluation activities. CLEF 2013 is the densest campaign so far, with nine laboratories and one workshop accepted. To identify the best proposals, besides the well-established criteria from previous editions of CLEF (such as topical relevance, novelty, potential impact on future world affairs, likely number of participants, and the quality of the organizing consortium), this year we stressed moving beyond previous years' efforts and connecting to real-life usage scenarios. Each lab, building on previous experience, demonstrated maturity by offering new tasks, new and larger data sets, new evaluation methods, or more languages. The labs are described in detail by their organizers; here we give only a brief overview.

PAN - Uncovering Plagiarism, Authorship, and Author Profiling

PAN 2013 addressed issues related to digital text forensics and evaluated participants' submissions along three tasks:
- Plagiarism Detection: Given a document, is it an original?
- Author Identification: Given a document, who wrote it?
- Author Profiling: Given a document, what are the author's demographics?

ImageCLEF 2013 - Cross-Language Image Annotation and Retrieval

The main goal of ImageCLEF, which started in 2003, is to support the development of visual media analysis, indexing, classification, and retrieval by building the infrastructure for the evaluation of visual information retrieval systems operating in monolingual, language-independent, and multi-modal contexts.
The three challenging tasks of ImageCLEF 2013 were:
- Photo Annotation and Retrieval: semantic concept detection using private collection data, and large-scale annotation using general Web data;
- Plant Identification: visual classification of leaf images for the identification of plant species;
- Robot Vision: semantic spatial understanding for a mobile robot using multimodal data.