A Graphical Model for unifying tracking and classification within a multimodal Human-Robot Interaction scenario
2010
2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops
This paper introduces our research platform for enabling a multimodal Human-Robot Interaction scenario, as well as our research vision: approaching problems in a holistic way to realize this scenario. In this paper, however, the main focus lies on the image processing domain, where our vision has been realized by combining particle tracking and Dynamic Bayesian Network classification in a unified Graphical Model. This combination allows for enhancing the tracking process by an adaptive motion …
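The abstract's central idea, a tracker whose motion model is adapted by classification results, can be illustrated with a minimal sketch. This is not the paper's unified Graphical Model; it is an assumption-laden toy in which a class posterior (standing in for a Dynamic Bayesian Network output) scales the process noise of a plain particle filter. All names here (`MOTION_NOISE`, `class_posterior`, the Gaussian observation model) are hypothetical.

```python
# Minimal sketch (not the authors' implementation): a particle filter whose
# motion noise is blended from a class posterior, illustrating how
# classification output could adapt the tracker's motion model.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-class motion noise, e.g. class 0 = "standing", 1 = "walking".
MOTION_NOISE = {0: 0.5, 1: 2.0}

def predict(particles, class_posterior):
    """Propagate particles with noise blended from the class posterior."""
    sigma = sum(p * MOTION_NOISE[c] for c, p in class_posterior.items())
    return particles + rng.normal(0.0, sigma, size=particles.shape)

def update(particles, weights, observation, obs_sigma=1.0):
    """Re-weight particles with an isotropic Gaussian observation likelihood."""
    d2 = np.sum((particles - observation) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / obs_sigma**2)
    return weights / weights.sum()

def resample(particles, weights):
    """Multinomial resampling; resets weights to uniform."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Toy run: 500 particles tracking a 2-D position over 10 frames.
particles = rng.normal(0.0, 1.0, size=(500, 2))
weights = np.full(500, 1.0 / 500)
for t in range(10):
    class_posterior = {0: 0.3, 1: 0.7}          # stand-in for a DBN output
    observation = np.array([0.1 * t, 0.2 * t])  # stand-in for a detection
    particles = predict(particles, class_posterior)
    weights = update(particles, weights, observation)
    particles, weights = resample(particles, weights)
print("estimate:", np.average(particles, axis=0, weights=weights))
```

In the paper's setting, the class posterior and the tracker would share one Graphical Model rather than being coupled through a hand-written table as above; the sketch only shows the direction of influence described in the abstract.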
doi:10.1109/cvprw.2010.5543751
dblp:conf/cvpr/RehrlGTBAWRMR10
fatcat:xyuqfibvx5ag3k5mhnkqpcltqq