A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2016; you can also visit the original URL.
The file type is application/pdf.
Deep Dynamic Neural Networks for Multimodal Gesture Segmentation and Recognition
2016
IEEE Transactions on Pattern Analysis and Machine Intelligence
This paper describes a novel method called Deep Dynamic Neural Networks (DDNN) for multimodal gesture recognition. A semi-supervised hierarchical dynamic framework based on a Hidden Markov Model (HMM) is proposed for simultaneous gesture segmentation and recognition, with skeleton joint information, depth, and RGB images as the multimodal input observations. Unlike most traditional approaches that rely on the construction of complex handcrafted features, our approach learns high-level …
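The abstract describes an HMM-based dynamic framework that segments and recognizes gestures jointly from per-frame multimodal observations. As a rough illustration only (not the paper's implementation), that kind of joint segmentation-and-recognition step can be sketched as Viterbi decoding over gesture states, where the emission log-probabilities stand in for a deep network's per-frame class scores; the state labels, transition values, and toy scores below are all assumptions for the sketch:

```python
import numpy as np

def viterbi(log_emissions, log_trans, log_prior):
    """Most likely state sequence for per-frame emission scores.

    log_emissions: (T, S) per-frame log-scores (here a stand-in for
                   a neural network's outputs over gesture classes)
    log_trans:     (S, S) log transition matrix, prev -> current
    log_prior:     (S,)   log initial-state distribution
    """
    T, S = log_emissions.shape
    dp = np.full((T, S), -np.inf)          # best log-score ending in each state
    back = np.zeros((T, S), dtype=int)     # backpointers for path recovery
    dp[0] = log_prior + log_emissions[0]
    for t in range(1, T):
        scores = dp[t - 1][:, None] + log_trans      # (prev, cur) candidates
        back[t] = np.argmax(scores, axis=0)
        dp[t] = scores[back[t], np.arange(S)] + log_emissions[t]
    path = np.zeros(T, dtype=int)
    path[-1] = int(np.argmax(dp[-1]))
    for t in range(T - 2, -1, -1):         # trace backpointers to the start
        path[t] = back[t + 1, path[t + 1]]
    return path

# Hypothetical 2-state toy: 0 = "rest", 1 = "gesture".
log_em = np.log(np.array([
    [0.9, 0.1], [0.9, 0.1],   # frames whose scores favour "rest"
    [0.1, 0.9], [0.1, 0.9],   # frames whose scores favour "gesture"
    [0.9, 0.1], [0.9, 0.1],
]))
log_trans = np.log(np.array([[0.8, 0.2], [0.2, 0.8]]))
log_prior = np.log(np.array([0.5, 0.5]))

path = viterbi(log_em, log_trans, log_prior)
print(path.tolist())  # → [0, 0, 1, 1, 0, 0]: rest / gesture / rest segments
```

Decoding the state sequence in one pass is what makes segmentation and recognition "simultaneous": the gesture boundaries fall out of the same dynamic program that assigns the labels.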
doi:10.1109/tpami.2016.2537340
pmid:26955020
fatcat:h3bpphgchfeqlartq4ewgbllfq