A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2017; you can also visit the original URL.
The file type is application/pdf.
Local descriptions for human action recognition from 3D reconstruction data
2014 IEEE International Conference on Image Processing (ICIP)
In this paper, a view-invariant approach to human action recognition using 3D reconstruction data is proposed. Initially, a set of calibrated Kinect sensors is employed to produce a 3D reconstruction of the performing subjects. Subsequently, a 3D flow field is estimated for every captured frame. For performing action recognition, the 'Bag-of-Words' methodology is followed, where Spatio-Temporal Interest Points (STIPs) are detected in the 4D space (xyz coordinates plus time). A novel
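The 'Bag-of-Words' step described in the abstract can be sketched as follows. This is a generic illustration, not the authors' implementation: descriptor dimensionality, vocabulary size, and the k-means settings are assumptions, and the local STIP descriptors are stood in for by random vectors.

```python
import numpy as np

def build_codebook(descriptors, k=8, iters=20, seed=0):
    """Form a visual vocabulary by running a simple k-means
    over local descriptors pooled from training sequences."""
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(iters):
        # Assign each descriptor to its nearest cluster centre.
        d2 = ((descriptors[:, None] - centers[None]) ** 2).sum(-1)
        labels = np.argmin(d2, axis=1)
        # Update each centre as the mean of its assigned descriptors.
        for j in range(k):
            pts = descriptors[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers

def bow_histogram(descriptors, centers):
    """Quantise one sequence's descriptors against the vocabulary
    and return an L1-normalised word histogram (action signature)."""
    d2 = ((descriptors[:, None] - centers[None]) ** 2).sum(-1)
    labels = np.argmin(d2, axis=1)
    hist = np.bincount(labels, minlength=len(centers)).astype(float)
    return hist / hist.sum()

# Usage with random stand-ins for per-sequence STIP descriptors
# (16-D descriptors and k=8 words are hypothetical choices).
rng = np.random.default_rng(1)
train = rng.normal(size=(200, 16))
codebook = build_codebook(train, k=8)
sig = bow_histogram(rng.normal(size=(40, 16)), codebook)
```

The resulting fixed-length histogram `sig` is what a classifier (e.g. an SVM) would consume, one signature per action sequence.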
doi:10.1109/icip.2014.7025569
dblp:conf/icip/PapadopoulosD14
fatcat:5ltg2l7aonanzhaid6h3tmpjba