A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2015; you can also visit the original URL.
The file type is application/pdf.
Extracting Latent Attributes from Video Scenes Using Text as Background Knowledge
2014
Proceedings of the Third Joint Conference on Lexical and Computational Semantics (*SEM 2014)
We explore the novel task of identifying latent attributes in video scenes, such as the mental states of actors, using only large text collections as background knowledge and minimal information about the videos, such as activity and actor types. We formalize the task and a measure of merit that accounts for the semantic relatedness of mental state terms. We develop and test several largely unsupervised information extraction models that identify the mental states of human participants in video scenes.
doi:10.3115/v1/s14-1016
dblp:conf/starsem/TranSC14
fatcat:cwwo6ocipjgn3edrcmq3bqniui