Harvesting Image Databases from the Web
IEEE Transactions on Pattern Analysis and Machine Intelligence
The objective of this work is to automatically generate a large number of images for a specified object class (for example, penguin). A multi-modal approach employing text, metadata, and visual features is used to gather many high-quality images from the web. Candidate images are obtained by a text-based web search querying on the object identifier (the word penguin). The web pages and the images they contain are downloaded. The task is then to remove irrelevant images and re-rank the remainder. First, the images are re-ranked using a Bayes posterior estimator trained on the text surrounding the image and metadata features (such as the image alternative tag, image title tag, and image filename); no visual information is used at this stage. Second, the top-ranked images are used as (noisy) training data and an SVM visual classifier is learnt to improve the ranking further. The principal novelty is in combining text/metadata and visual features in order to achieve a completely automatic ranking of the images. Examples are given for a selection of animals (e.g., camels, sharks, penguins), vehicles (cars, airplanes, bikes), and other classes (guitar, wristwatch), totalling 18 classes. The results are assessed by precision/recall curves on ground-truth annotated data and by comparison to previous approaches, including those of Berg et al. (on an additional six classes) and Fergus et al.
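The two-stage pipeline described above can be sketched as follows. This is a minimal illustration with synthetic data, not the paper's implementation: the text/metadata features, the noisy stand-in labels for the Bayes stage, and the top/bottom split sizes are all hypothetical choices made for the sketch. It uses scikit-learn's `BernoulliNB` as the Bayes posterior estimator over binary text/metadata features and `LinearSVC` as the visual classifier trained on the noisily ranked output of stage one.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n = 200

# Synthetic stand-ins: hidden ground truth, binary text/metadata features
# (e.g. "class name appears in filename / alt tag / title tag"), and
# continuous visual features that are more separable for relevant images.
relevant = rng.random(n) < 0.5
text = (rng.random((n, 3)) < np.where(relevant[:, None], 0.8, 0.2)).astype(int)
visual = rng.normal(loc=np.where(relevant[:, None], 1.0, -1.0),
                    scale=1.0, size=(n, 5))

# Stage 1: rank by Bayes posterior on text/metadata features only.
# Noisy labels stand in for the text-based supervision used in the paper.
weak = relevant ^ (rng.random(n) < 0.1)
nb = BernoulliNB().fit(text, weak)
text_score = nb.predict_proba(text)[:, 1]
order = np.argsort(-text_score)

# Stage 2: treat top-ranked images as (noisy) positives and bottom-ranked
# as negatives, then learn an SVM on visual features to refine the ranking.
top, bottom = order[:40], order[-40:]
X = np.vstack([visual[top], visual[bottom]])
y = np.concatenate([np.ones(40), np.zeros(40)])
svm = LinearSVC().fit(X, y)
final_order = np.argsort(-svm.decision_function(visual))

# Precision at 50 of the refined, visually re-ranked list.
prec50 = relevant[final_order[:50]].mean()
```

The key design point mirrored here is that stage one never looks at pixels, and stage two never looks at the text: the SVM is trained purely on the (noisy) output ranking of the text stage, which is what makes the whole harvesting loop automatic.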