Actor and Observer: Joint Modeling of First and Third-Person Videos
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
gsig/actor-observer

[Figure 1 panels: Third Person | First Person | "Cleaning Dishes" | Learned Joint Representation | Transfer — example caption: "Person is typing on a laptop. Then they put down the laptop and pick up a pillow."]

Figure 1: We explore how to reason jointly about first- and third-person video for understanding human actions. We collect paired first- and third-person videos of actions sharing the same script. Our model learns a representation from the relationship between these two modalities. We demonstrate multiple applications of this research direction, for example, transferring knowledge from the observer's to the actor's perspective.

Abstract

Several theories in cognitive neuroscience suggest that, when people interact with the world or simulate interactions, they do so from a first-person egocentric perspective and seamlessly transfer knowledge between the third-person (observer) and first-person (actor) viewpoints. Despite this, learning such models for human action recognition has not been achievable due to the lack of data. This paper takes a step in this direction with the introduction of Charades-Ego, a large-scale dataset of paired first-person and third-person videos involving 112 people, with 4000 paired videos. This enables learning the link between the two perspectives, actor and observer. We thereby address one of the biggest bottlenecks facing egocentric-vision research: providing a link from first-person video to the abundant third-person data on the web. We use this data to learn a joint representation of first- and third-person videos with only weak supervision, and show its effectiveness for transferring knowledge from the third-person to the first-person domain.
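The abstract states that a joint representation of paired first- and third-person clips is learned with only weak supervision, but does not spell out the objective here. A common way to set up such a cross-view embedding is a triplet (margin) loss that pulls a paired first/third-person clip together and pushes an unrelated clip away. The sketch below is illustrative only — the margin value, squared-distance form, and toy embeddings are assumptions, not the paper's exact training objective:

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    """Project embeddings onto the unit sphere before comparing them."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def triplet_loss(anchor, positive, negative, margin=0.5):
    """Hinge loss: the paired (positive) clip must sit closer to the
    anchor than the mismatched (negative) clip by at least `margin`."""
    a, p, n = (l2_normalize(v) for v in (anchor, positive, negative))
    d_pos = np.sum((a - p) ** 2, axis=-1)  # distance to the paired clip
    d_neg = np.sum((a - n) ** 2, axis=-1)  # distance to a mismatched clip
    return np.maximum(0.0, d_pos - d_neg + margin)

# Toy stand-ins for clip embeddings: a third-person anchor, its paired
# first-person clip (a small perturbation), and an unrelated clip.
rng = np.random.default_rng(0)
third = rng.normal(size=(4, 128))
first_paired = third + 0.1 * rng.normal(size=(4, 128))
first_other = rng.normal(size=(4, 128))

loss = triplet_loss(third, first_paired, first_other)
```

In an actual training loop the embeddings would come from a shared video network applied to both viewpoints, and the loss would be minimized over many sampled triplets; this snippet only shows the shape of the objective.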