Viewpoint Invariant Action Recognition using RGB-D Videos [article]

Jian Liu, Naveed Akhtar, Ajmal Mian
2018 arXiv pre-print
In video-based action recognition, viewpoint variations often pose major challenges because the same actions can appear different from different views. We use the complementary RGB and Depth information from RGB-D cameras to address this problem. The proposed technique capitalizes on the spatio-temporal information available in the two data streams to extract action features that are largely insensitive to viewpoint variations. We use the RGB data to compute dense trajectories that are translated to viewpoint insensitive deep features under a non-linear knowledge transfer model. Similarly, the Depth stream is used to extract CNN-based view invariant features on which a Fourier Temporal Pyramid is computed to incorporate the temporal information. The heterogeneous features from the two streams are combined and used as a dictionary to predict the labels of the test samples. To that end, we propose a sparse-dense collaborative representation classification scheme that strikes a balance between the discriminative abilities of the dense and the sparse representations of the samples over the extracted heterogeneous dictionary.
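The Fourier Temporal Pyramid step over the per-frame Depth CNN features can be pictured with the following minimal numpy sketch. It assumes a (frames × dimension) feature matrix, three pyramid levels, and four retained low-frequency coefficients per segment; the function name `fourier_temporal_pyramid` and these settings are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def fourier_temporal_pyramid(feats, levels=3, n_low_freq=4):
    """Sketch of a Fourier Temporal Pyramid descriptor.

    feats: (T, d) array, one d-dimensional per-frame feature.
    Concatenates low-frequency FFT magnitudes of each pyramid segment.
    (levels and n_low_freq are assumed values, not the paper's.)
    """
    T, d = feats.shape
    parts = []
    for level in range(levels):
        n_segments = 2 ** level                          # 1, 2, 4, ... segments
        bounds = np.linspace(0, T, n_segments + 1).astype(int)
        for s in range(n_segments):
            segment = feats[bounds[s]:bounds[s + 1]]     # (t_s, d) temporal slice
            if segment.shape[0] == 0:                    # guard very short videos
                segment = feats[-1:, :]
            spectrum = np.abs(np.fft.fft(segment, axis=0))  # per-dimension FFT magnitude
            k = min(n_low_freq, spectrum.shape[0])
            parts.append(spectrum[:k].ravel())           # keep low-frequency coefficients
            if k < n_low_freq:                           # zero-pad to a fixed length
                parts.append(np.zeros((n_low_freq - k) * d))
    return np.concatenate(parts)

# Example: a 60-frame clip with 512-dimensional per-frame features.
descriptor = fourier_temporal_pyramid(np.random.rand(60, 512))
```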
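The sparse-dense collaborative representation classification can likewise be pictured as computing class-wise reconstruction residuals from both an l2-regularized (dense) code and an l1-regularized (sparse) code over the heterogeneous dictionary, then blending the two residuals. The sketch below uses scikit-learn's Ridge and Lasso solvers; the function name `sparse_dense_crc`, the regularization strengths, and the mixing weight `beta` are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

def sparse_dense_crc(D, labels, y, lam_dense=0.01, lam_sparse=0.01, beta=0.5):
    """Classify test feature y over dictionary D (d x N, one training sample
    per column) by blending class-wise residuals of a dense (ridge) code and
    a sparse (lasso) code. beta is a hypothetical mixing weight, not the
    paper's balancing scheme.
    """
    labels = np.asarray(labels)

    # Dense (collaborative) code: l2-regularized least squares over D.
    a_dense = Ridge(alpha=lam_dense, fit_intercept=False).fit(D, y).coef_
    # Sparse code: l1-regularized coding over the same dictionary.
    a_sparse = Lasso(alpha=lam_sparse, fit_intercept=False,
                     max_iter=5000).fit(D, y).coef_

    scores = {}
    for c in np.unique(labels):
        idx = (labels == c)
        # Reconstruct y using only the coefficients of class c's atoms.
        r_dense = np.linalg.norm(y - D[:, idx] @ a_dense[idx])
        r_sparse = np.linalg.norm(y - D[:, idx] @ a_sparse[idx])
        scores[c] = beta * r_dense + (1.0 - beta) * r_sparse

    # Predict the class with the smallest blended residual.
    return min(scores, key=scores.get)
```

In this sketch, `D` holds one combined RGB-D feature vector per training sample as a column, `labels` is the matching length-N array of action labels, and the predicted label is the class whose atoms best reconstruct the test sample under both codes.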
arXiv:1709.05087v2