Statistical Gesture Models for 3D Motion Capture from a Library of Gestures with Variants
[chapter]
Lecture Notes in Computer Science, 2010
A challenge for 3D motion capture by monocular vision is 3D-2D projection ambiguity, which can lead to incorrect poses during tracking. In this paper, we propose improving 3D motion capture by learning human gesture models from a library of gestures with variants. This library was created from virtual human animations. Gestures are described as Gaussian Process Dynamical Models (GPDM) and are used as constraints for motion tracking. Given the raw input poses from the tracker, the gesture model
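The abstract describes learning gesture dynamics and using them to constrain the raw poses coming from a tracker. A minimal sketch of that idea, under loose assumptions: here a plain Gaussian process regression over pose transitions stands in for the full GPDM latent-space model, the gesture data is synthetic, and the `constrain` blending function and its `weight` parameter are hypothetical illustrations, not the paper's method.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical training data: one gesture variant as a sequence of
# 2-DOF joint-angle poses (T x 2). The paper instead uses a library
# of virtual human animations.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 50)
poses = np.column_stack([np.sin(t), np.cos(t)])  # synthetic gesture

# Learn the gesture dynamics pose_t -> pose_{t+1} with a GP
# (a simplified stand-in for the latent dynamics of a GPDM).
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-4)
gp.fit(poses[:-1], poses[1:])

def constrain(prev_pose, raw_pose, weight=0.5):
    """Blend the tracker's raw pose with the gesture model's prediction."""
    predicted = gp.predict(prev_pose.reshape(1, -1))[0]
    return weight * predicted + (1 - weight) * raw_pose

# A noisy raw pose from the tracker is pulled toward the gesture model.
raw = poses[10] + rng.normal(0, 0.3, size=2)
corrected = constrain(poses[9], raw)
```

The blend weight plays the role of trading off trust in the tracker against trust in the learned gesture prior; a probabilistic formulation would derive it from the GP's predictive variance instead of fixing it.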
doi:10.1007/978-3-642-12553-9_19
fatcat:tz2pwkip25hk3o6ao4bseinwni