Online learning and fusion of orientation appearance models for robust rigid object tracking
2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG)
We introduce a robust framework for learning and fusing orientation appearance models based on both texture and depth information for rigid object tracking. Our framework fuses data obtained from a standard visual camera with dense depth maps obtained by low-cost consumer depth cameras such as the Kinect. To combine these two completely different modalities, we propose to use features that do not depend on the data representation: angles. More specifically, our framework combines image gradient orientations, as extracted from intensity images, with the directions of surface normals computed from dense depth fields. We propose to capture the correlations between the obtained orientation appearance models using a fusion approach motivated by the original Active Appearance Models (AAM). To incorporate these features in a learning framework, we use a robust kernel based on the Euler representation of angles. This kernel does not require off-line training and can therefore be efficiently implemented online. The robustness of learning from orientation appearance models is demonstrated both theoretically and experimentally in this work. The kernel enables us to cope with gross measurement errors and missing data, as well as typical problems such as illumination changes and occlusions. By combining the proposed models with a particle filter, the proposed framework performs 2D plus 3D rigid object tracking, achieving robust performance in very difficult tracking scenarios including extreme pose variations.
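The abstract's core idea, comparing appearance through angles via the Euler representation, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: each angle θ is mapped to the unit complex number e^{iθ} (stored here as a stacked cosine/sine vector), so the inner product of two such representations is the mean cosine of angle differences, a bounded similarity that limits the influence of gross outliers. The function names (`euler_representation`, `orientation_similarity`, `gradient_orientations`) are illustrative, not from the paper.

```python
import numpy as np

def euler_representation(angles):
    # Map each angle theta to e^{i*theta}, stored as the stacked real
    # vector [cos(theta); sin(theta)], normalized to unit L2 norm
    # (sum of cos^2 + sin^2 over n angles is n, hence the sqrt(n)).
    return np.concatenate([np.cos(angles), np.sin(angles)]) / np.sqrt(angles.size)

def orientation_similarity(angles_a, angles_b):
    # Inner product of Euler representations = mean of cos(a - b):
    # bounded in [-1, 1], so a few grossly wrong angles (occlusion,
    # sensor noise) cannot dominate the similarity score.
    return euler_representation(angles_a) @ euler_representation(angles_b)

def gradient_orientations(img):
    # Image gradient orientations from an intensity image, using
    # simple finite differences (np.gradient) as a stand-in for
    # whatever derivative filter a full system would use.
    gy, gx = np.gradient(img.astype(float))
    return np.arctan2(gy, gx).ravel()
```

For example, two identical orientation fields score 1, while fields offset by pi everywhere score -1; because the kernel averages bounded cosines, corrupting a small fraction of angles shifts the score only proportionally, which is the robustness property the abstract claims.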