Recognizing American Sign Language Gestures from Within Continuous Videos
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2018
In this paper, we propose a novel hybrid model, the 3D recurrent convolutional neural network (3DRCNN), to recognize American Sign Language (ASL) gestures and localize their temporal boundaries within continuous videos by fusing multi-modality features. The proposed 3DRCNN model integrates a 3D convolutional neural network (3DCNN) with an enhanced fully connected recurrent neural network (FC-RNN): the 3DCNN learns multi-modality features from the RGB, motion, and depth channels, while the FC-RNN captures the …
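The abstract describes a two-stage pipeline: per-modality 3D convolutional features (RGB, motion, depth) fused and fed into a fully connected recurrent network. The toy sketch below illustrates that data flow only; it is not the authors' implementation, and all shapes, kernel sizes, and the tanh recurrence are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv3d_feature(clip, kernel):
    """Valid 3D cross-correlation of a single-channel clip with one kernel,
    followed by global average pooling, yielding one scalar feature.
    (Stand-in for a 3DCNN feature extractor; sizes are toy assumptions.)"""
    T, H, W = clip.shape
    t, h, w = kernel.shape
    out = np.empty((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(clip[i:i + t, j:j + h, k:k + w] * kernel)
    return out.mean()

def rnn_step(h, x, W_h, W_x):
    """One fully connected recurrent update: h_t = tanh(W_h h_{t-1} + W_x x_t)."""
    return np.tanh(W_h @ h + W_x @ x)

# Three toy modality streams (RGB intensity, motion, depth), 4 frames of 8x8.
modalities = ("rgb", "motion", "depth")
clips = {m: rng.standard_normal((4, 8, 8)) for m in modalities}
kernels = {m: rng.standard_normal((2, 3, 3)) for m in modalities}

# Fuse modalities by concatenating the per-stream 3D conv features.
x = np.array([conv3d_feature(clips[m], kernels[m]) for m in modalities])

hidden = 5  # hidden size chosen arbitrarily for the sketch
W_h = 0.1 * rng.standard_normal((hidden, hidden))
W_x = 0.1 * rng.standard_normal((hidden, len(modalities)))
h = np.zeros(hidden)
for _ in range(3):  # unroll the recurrence over a few time steps
    h = rnn_step(h, x, W_h, W_x)

print(x.shape, h.shape)
```

In the actual model, the recurrent state would feed a classifier that labels each temporal segment with an ASL gesture; here the point is only the fused-feature-into-recurrence structure.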
doi:10.1109/cvprw.2018.00280
dblp:conf/cvpr/YeTHL18