Learning Category-Specific Sharable and Exemplary Visual Elements for Image Classification
This paper presents a novel method for learning discriminative image representations within a visual dictionary learning framework for image classification. Visual dictionary learning can represent an input image using an over-complete set of visual elements, while sparsity suppresses distractors and prevents overfitting; both characteristics benefit classification. However, one shortcoming of existing dictionary learning methods is that they neglect the potential
correlations across visual elements, especially in the category-specific feature space. To address this problem, we first propose learning multiple discriminative category-specific dictionaries (DCSD) from all categories. The DCSD explore the sharable visual elements within each category; as a result, the learned category-specific visual elements encourage image features from the same class to have similar representations. In addition, exemplary data reflect the main characteristics of the whole dataset and can improve the performance of algorithms that employ them. We therefore further propose a representative pattern dictionary (RPD) model that discovers exemplary visual elements to promote the discriminative capability of the feature representation. These exemplary visual elements are essentially a subset of the over-complete visual elements and can represent the whole sample set effectively. Finally, we design a novel strategy that jointly integrates the merits of object proposals and deep features to strengthen the semantic information of the image-level feature. Experimental results on benchmark datasets demonstrate the effectiveness of our method, which is superior to recent competing dictionary learning and deep learning based image classification approaches.
INDEX TERMS Dictionary learning, deep features, group sparsity.
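To make the dictionary learning setting concrete, the following is a minimal sketch of sparse coding with a learned over-complete dictionary, the basic building block the abstract describes. It is not the paper's DCSD or RPD model; the toy data, dictionary size, and sparsity level are illustrative assumptions, using scikit-learn's generic `DictionaryLearning` in place of the authors' formulation.

```python
# Sketch: sparse coding over an over-complete dictionary (illustrative only).
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.RandomState(0)
X = rng.randn(200, 16)  # 200 toy feature vectors of dimension 16

# Learn an over-complete dictionary (32 atoms > 16 dimensions) and encode
# each sample with at most 3 nonzero coefficients via OMP (sparsity constraint).
dico = DictionaryLearning(
    n_components=32,
    transform_algorithm="omp",
    transform_n_nonzero_coefs=3,
    max_iter=10,
    random_state=0,
)
codes = dico.fit_transform(X)

print(codes.shape)  # (200, 32): one sparse code per sample
print((np.count_nonzero(codes, axis=1) <= 3).all())  # True: codes are sparse
```

A category-specific variant, in the spirit of DCSD, would learn one such dictionary per class and encourage same-class samples to reuse the same (sharable) atoms, e.g., via a group-sparsity penalty on the codes.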