A copy of this work was available on the public web and has been preserved in the Wayback Machine; the capture dates from 2019.
The file type is application/pdf.
Online learning and fusion of orientation appearance models for robust rigid object tracking
2013
2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG)
We introduce a robust framework for learning and fusing orientation appearance models based on both texture and depth information for rigid object tracking. Our framework fuses data obtained from a standard visual camera and dense depth maps obtained by low-cost consumer depth cameras such as the Kinect. To combine these two completely different modalities, we propose to use features that do not depend on the data representation: angles. More specifically, our framework combines image
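The abstract's key idea is that per-pixel gradient orientation angles are representation-independent, so the same feature can be extracted from an intensity image and a depth map and then fused. A minimal sketch of that idea is below; the function names and the cos/sin stacking scheme are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def orientation_angles(channel):
    # Per-pixel gradient orientation for any 2D map. Because angles do not
    # depend on the scale or units of the underlying data, the same feature
    # applies to an intensity image and a depth map alike.
    gy, gx = np.gradient(channel.astype(float))
    return np.arctan2(gy, gx)  # angles in (-pi, pi]

def fused_features(intensity, depth):
    # Hypothetical fusion step: map the circular angle features of both
    # modalities to cos/sin pairs so they live in a common Euclidean space
    # where they can be compared or averaged.
    theta_i = orientation_angles(intensity)
    theta_d = orientation_angles(depth)
    return np.stack([np.cos(theta_i), np.sin(theta_i),
                     np.cos(theta_d), np.sin(theta_d)], axis=-1)
```

The cos/sin encoding is a standard way to handle angle wrap-around; any appearance model built on top of it sees both modalities through the same bounded, representation-free features.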
doi:10.1109/fg.2013.6553798
dblp:conf/fgr/MarrasATZP13
fatcat:eqckpkxqyvfu5l7qfeqjuh4fbu