Overview of the ImageCLEFmed 2006 Medical Retrieval and Medical Annotation Tasks

Henning Müller, Thomas Deselaers, Thomas Deserno, Paul Clough, Eugene Kim, William Hersh
2007 Lecture Notes in Computer Science  
This paper describes the medical image retrieval and annotation tasks of ImageCLEF 2006. Both tasks are described with respect to goals, databases, topics, results, and techniques. The ImageCLEFmed retrieval task had 12 participating groups submitting 100 runs. Most runs were automatic, with only a few manual or interactive ones. Purely textual runs outnumbered purely visual runs, but most runs were mixed, using both visual and textual information. None of the manual or interactive techniques were significantly better than the automatic runs. The best-performing systems combined visual and textual techniques, but combinations of visual and textual features often did not improve performance. Purely visual systems performed well only on visual topics. The medical automatic annotation task used a larger database of 10,000 training images from 116 classes, up from 9,000 images from 57 classes in 2005. Twelve groups submitted 28 runs. Despite the larger number of classes, results were almost as good as in 2005, which demonstrates a clear improvement in performance. The best system of 2005 would have ranked only mid-field in 2006.

keywords: image retrieval, automatic image annotation, medical information retrieval
doi:10.1007/978-3-540-74999-8_72