Viewpoint invariant semantic object and scene categorization with RGB-D sensors

Hasan F. M. Zaki, Faisal Shafait, Ajmal Mian
Autonomous Robots, 2018
Understanding the semantics of objects and scenes using multi-modal RGB-D sensors serves many robotics applications. Key challenges for accurate RGB-D image recognition are the scarcity of training data, variations due to viewpoint changes, and the heterogeneous nature of the data. We address these problems and propose a generic deep learning framework that uses a pre-trained convolutional neural network (CNN) as a feature extractor for both the colour and depth channels. We propose a rich multi-scale feature representation, referred to as Convolutional Hypercube Pyramid (HP-CNN), that encodes discriminative information from the convolutional tensors at different levels of detail. We also present a technique to fuse the proposed HP-CNN with the activations of fully connected neurons, using an Extreme Learning Machine classifier in a late fusion scheme, which yields a highly discriminative and compact representation. To further improve performance, we devise HP-CNN-T, a view-invariant descriptor extracted from a Multi-view 3D Object Pose (M3DOP) model. M3DOP is learned from over 140,000 RGB-D images that are synthetically generated by rendering CAD models from different viewpoints. Extensive evaluations on four RGB-D object and scene recognition datasets demonstrate that HP-CNN and HP-CNN-T consistently outperform state-of-the-art methods on several recognition tasks by a significant margin.
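
For intuition only, below is a minimal sketch of a hypercube-pyramid-style descriptor in PyTorch. It assumes a VGG16 backbone, a max-pooling pyramid over the last convolutional tensor (1x1, 2x2, 4x4 grids), and depth maps colourised to three channels so both modalities share one pre-trained network. The layer names ("features.28", "classifier.3") and pyramid levels are illustrative assumptions; the paper's actual HP-CNN construction, the Extreme Learning Machine fusion step, and M3DOP training are not specified in this abstract.

    import torch
    import torch.nn.functional as F
    from torchvision import models
    from torchvision.models.feature_extraction import create_feature_extractor

    # Hypothetical tap points: last conv layer and an FC layer of VGG16.
    LAYERS = {"features.28": "conv5", "classifier.3": "fc7"}

    def pyramid_pool(feat, levels=(1, 2, 4)):
        """Max-pool a conv tensor (N, C, H, W) over a spatial pyramid and
        concatenate the pooled responses into one fixed-length vector."""
        pooled = []
        for l in levels:
            p = F.adaptive_max_pool2d(feat, output_size=l)  # (N, C, l, l)
            pooled.append(p.flatten(start_dim=1))
        return torch.cat(pooled, dim=1)

    def extract_descriptor(rgb, depth_as_rgb, extractor):
        """Run both modalities through the same pre-trained CNN, pool the
        conv tensor into a multi-scale part, and keep FC activations for
        late fusion (a rough analogue of the HP-CNN pipeline)."""
        parts = []
        for x in (rgb, depth_as_rgb):
            out = extractor(x)
            hp = pyramid_pool(out["conv5"])        # multi-scale conv part
            fc = out["fc7"].flatten(start_dim=1)   # holistic FC part
            parts.append(torch.cat([hp, fc], dim=1))
        return torch.cat(parts, dim=1)

    vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()
    extractor = create_feature_extractor(vgg, return_nodes=LAYERS)
    rgb = torch.rand(1, 3, 224, 224)
    depth = torch.rand(1, 3, 224, 224)  # depth map colourised to 3 channels
    with torch.no_grad():
        descriptor = extract_descriptor(rgb, depth, extractor)
    print(descriptor.shape)

In the paper, the fusion of the convolutional-pyramid and fully connected parts is performed with an Extreme Learning Machine classifier in a late fusion scheme; here a single concatenated vector merely stands in for that step, and a classifier would be trained on top of it.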
doi:10.1007/s10514-018-9776-8