Action2Vec: A Crossmodal Embedding Approach to Action Learning

Meera Hahn, Andrew Silva, James M. Rehg
2019, arXiv preprint
We describe a novel cross-modal embedding space for actions, named Action2Vec, which combines linguistic cues from class labels with spatio-temporal features derived from video clips. Our approach uses a hierarchical recurrent network to capture the temporal structure of video features. We train our embedding using a joint loss that combines classification accuracy with similarity to Word2Vec semantics. We evaluate Action2Vec by performing zero-shot action recognition and obtain state-of-the-art results on three standard datasets. In addition, we present two novel analogy tests which quantify the extent to which our joint embedding captures distributional semantics. This is the first joint embedding space to combine verbs and action videos, and the first to be thoroughly evaluated with respect to its distributional semantics.
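The joint loss described above combines a classification term with a similarity term against the label's Word2Vec vector. A minimal sketch of one plausible form of such a loss, in NumPy, is below; the exact formulation (cross-entropy plus a weighted cosine-distance penalty, and the weight `lam`) is an assumption for illustration, not the paper's published objective.

```python
import numpy as np

def cosine(u, v):
    # cosine similarity between two vectors
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def joint_loss(logits, label, video_emb, word2vec_emb, lam=0.5):
    """Hypothetical joint objective: cross-entropy classification loss
    plus a cosine-distance penalty toward the label's Word2Vec vector."""
    # numerically stable softmax cross-entropy over class logits
    z = logits - logits.max()
    probs = np.exp(z) / np.exp(z).sum()
    ce = -np.log(probs[label] + 1e-12)
    # semantic term: distance of the video embedding from the word vector
    sim_penalty = 1.0 - cosine(video_emb, word2vec_emb)
    return ce + lam * sim_penalty

# toy example: a video embedding that matches its label's word vector
logits = np.array([2.0, 0.0])
v = np.array([1.0, 0.0])
loss_aligned = joint_loss(logits, 0, v, v)                      # similarity term is zero
loss_misaligned = joint_loss(logits, 0, v, np.array([0.0, 1.0]))  # orthogonal word vector
```

An aligned video embedding incurs only the classification loss, while a misaligned one pays the additional semantic penalty, which is what pulls the two modalities into a shared space during training.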
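The analogy tests mentioned above follow the standard word-vector arithmetic pattern (a : b :: c : ?), answered by nearest-neighbor search in the embedding space. A small self-contained sketch, using made-up 3-d action vectors (the names and values are hypothetical, chosen only so the arithmetic resolves cleanly):

```python
import numpy as np

def nearest(query, vocab):
    # return the vocabulary entry whose vector is most cosine-similar to query
    def cos(u, v):
        return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return max(vocab, key=lambda name: cos(vocab[name], query))

# toy action embeddings (hypothetical, for illustration only)
vocab = {
    "walk":      np.array([1.0, 0.0, 0.0]),
    "run":       np.array([1.0, 1.0, 0.0]),
    "swim":      np.array([0.0, 0.0, 1.0]),
    "swim_fast": np.array([0.0, 1.0, 1.0]),
}

# analogy: walk : run :: swim : ?
query = vocab["run"] - vocab["walk"] + vocab["swim"]
answer = nearest(query, vocab)
```

In a real evaluation the query would be held out of the candidate set; here the toy vectors simply show the offset arithmetic that such a test quantifies.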
arXiv:1901.00484v1