Multi-scale Recognition with DAG-CNNs

Songfan Yang, Deva Ramanan
2015 IEEE International Conference on Computer Vision (ICCV)
We explore multi-scale convolutional neural nets (CNNs) for image classification. Contemporary approaches extract features from a single output layer. By extracting features from multiple layers, one can simultaneously reason about high-, mid-, and low-level features during classification. The resulting multi-scale architecture can itself be seen as a feed-forward model structured as a directed acyclic graph (DAG-CNNs). We use DAG-CNNs to learn a set of multi-scale features that can be effectively shared between coarse and fine-grained classification tasks. While fine-tuning such models helps performance, we show that even "off-the-shelf" multi-scale features perform quite well. We present extensive analysis and demonstrate state-of-the-art classification performance on three standard scene benchmarks (SUN397, MIT67, and Scene15). On the heavily benchmarked MIT67 and Scene15 datasets, our results reduce the lowest previously-reported error by 23.9% and 9.5%, respectively.
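The core idea described above, tapping features from several intermediate CNN layers and letting each contribute to the final class scores, can be sketched briefly. The snippet below is an illustrative sketch rather than the authors' released code: it assumes PyTorch/torchvision, uses a VGG-16 backbone with arbitrarily chosen tap layers, global-average-pools each tapped activation into a descriptor, and sums per-scale linear class scores.

```python
# Illustrative multi-scale classification sketch (assumed PyTorch/torchvision;
# tap-layer choices, pooling, and score fusion are hypothetical, not the
# paper's exact DAG-CNN configuration).
import torch
import torch.nn as nn
from torchvision.models import vgg16


class MultiScaleClassifier(nn.Module):
    """Attach a linear head to several intermediate conv layers and sum the
    per-scale class scores, so high-, mid-, and low-level features all vote."""

    def __init__(self, tap_layers=(15, 22, 29), num_classes=67):
        super().__init__()
        self.features = vgg16(weights=None).features   # VGG-16 conv backbone
        self.tap_layers = set(tap_layers)               # layer indices to tap
        # Channel counts at the tapped VGG-16 layers (ReLU outputs of
        # conv3_3, conv4_3, conv5_3 for the default indices above).
        channels = {15: 256, 22: 512, 29: 512}
        self.heads = nn.ModuleDict({
            str(i): nn.Linear(channels[i], num_classes) for i in tap_layers
        })
        self.pool = nn.AdaptiveAvgPool2d(1)             # global average pooling

    def forward(self, x):
        scores = 0
        for i, layer in enumerate(self.features):
            x = layer(x)
            if i in self.tap_layers:
                feat = self.pool(x).flatten(1)          # B x C descriptor
                scores = scores + self.heads[str(i)](feat)
        return scores                                    # summed multi-scale scores


if __name__ == "__main__":
    model = MultiScaleClassifier()
    logits = model(torch.randn(2, 3, 224, 224))
    print(logits.shape)  # torch.Size([2, 67])
```

Because every per-scale head feeds the same summed output, the whole model remains a feed-forward DAG and can be trained end-to-end with an ordinary classification loss.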
arXiv:1505.05232v1 [cs.CV] doi:10.1109/iccv.2015.144 dblp:conf/iccv/YangR15