A copy of this work was available on the public web and has been preserved in the Wayback Machine; the capture dates from 2017.
The file type is application/pdf.
A Hough transform-based voting framework for action recognition
2010
2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition
We present a method to classify and localize human actions in video using a Hough transform voting framework. Random trees are trained to learn a mapping between densely-sampled feature patches and their corresponding votes in a spatio-temporal-action Hough space. The leaves of the trees form a discriminative multi-class codebook that shares features between the action classes and votes for action centers in a probabilistic manner. Using low-level features such as gradients and optical flow, we […]
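The abstract describes a two-stage scheme: dense feature patches are passed through trained random trees, and each leaf casts probabilistic votes for action centers in a spatio-temporal accumulator per action class; peaks in that accumulator give the detected actions. The sketch below is only an illustration of that voting step, not the authors' implementation: the `forest`/`leaf` interface (`get_leaf`, `offsets`, `class_probs`) and the vote weighting are hypothetical placeholders.

```python
# Minimal sketch of Hough-space voting for action centers, assuming a trained
# random forest whose leaves store spatio-temporal offsets and per-class
# probabilities (hypothetical interface, not the paper's code).
import numpy as np

def cast_votes(patches, forest, accumulator_shape, num_classes):
    """Accumulate leaf votes into a per-class (x, y, t) Hough accumulator.

    patches: iterable of (features, (x, y, t)) for densely-sampled patches.
    forest:  iterable of trees; tree.get_leaf(features) returns a leaf with
             .offsets (list of (dx, dy, dt)) and .class_probs (num_classes,).
    """
    hough = np.zeros((num_classes,) + accumulator_shape)
    for features, (x, y, t) in patches:
        for tree in forest:
            leaf = tree.get_leaf(features)  # hypothetical leaf lookup
            if not leaf.offsets:
                continue
            weight = 1.0 / (len(forest) * len(leaf.offsets))
            for dx, dy, dt in leaf.offsets:
                cx, cy, ct = x + dx, y + dy, t + dt
                if (0 <= cx < accumulator_shape[0]
                        and 0 <= cy < accumulator_shape[1]
                        and 0 <= ct < accumulator_shape[2]):
                    # Each class receives a vote proportional to the leaf's
                    # class probability, so the codebook is shared across classes.
                    hough[:, cx, cy, ct] += weight * np.asarray(leaf.class_probs)
    return hough

def detect_action_centers(hough, threshold):
    """Return (class, x, y, t, score) for the strongest peak of each class."""
    detections = []
    for c in range(hough.shape[0]):
        idx = np.unravel_index(np.argmax(hough[c]), hough[c].shape)
        score = hough[c][idx]
        if score >= threshold:
            detections.append((c, *idx, score))
    return detections
```

In practice the accumulator would be smoothed (e.g. with a Gaussian) before peak detection and multiple local maxima per class would be extracted; the single-argmax detector above is just the simplest way to show how class and spatio-temporal location are read off the Hough space together.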
doi:10.1109/cvpr.2010.5539883
dblp:conf/cvpr/YaoGG10
fatcat:7d6slbftqvhbzafwsykupd2laq