Video + CLIP Baseline for Ego4D Long-term Action Anticipation

Srijan Das, Michael S. Ryoo
2022, arXiv preprint
In this report, we introduce our adaptation of image-text models for long-term action anticipation. Our Video + CLIP framework combines a large-scale pre-trained image-text model, CLIP, with a SlowFast video encoder. The CLIP embedding provides fine-grained understanding of the objects relevant to an action, whereas the SlowFast network models temporal information within a video clip of a few frames. We show that the features obtained from the two encoders are complementary to each other, and thus outperform the baseline on Ego4D for the task of long-term action anticipation. Our code is available at github.com/srijandas07/clip_baseline_LTA_Ego4d.
arXiv:2207.00579v1
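
As a rough illustration of the late-fusion idea the abstract describes, the sketch below concatenates a pooled CLIP frame embedding with a pooled SlowFast clip feature and feeds the result to a linear classifier. The class name, feature dimensions, class count, and the concatenation-plus-linear fusion scheme are all assumptions made for illustration, not the authors' implementation; their actual code is in the linked repository.

```python
import torch
import torch.nn as nn

# Assumed dimensions: 512-d matches CLIP ViT-B/32 image embeddings and
# 2304-d matches pooled SlowFast-R50 features, but the paper's exact
# encoders and dimensions may differ.
CLIP_DIM = 512
SLOWFAST_DIM = 2304
NUM_ACTIONS = 115  # placeholder class count (hypothetical)


class VideoClipFusion(nn.Module):
    """Concatenate per-clip CLIP and SlowFast features, then classify."""

    def __init__(self, clip_dim=CLIP_DIM, video_dim=SLOWFAST_DIM,
                 num_actions=NUM_ACTIONS):
        super().__init__()
        self.head = nn.Linear(clip_dim + video_dim, num_actions)

    def forward(self, clip_feat, video_feat):
        # clip_feat:  (B, clip_dim)  e.g. mean-pooled CLIP frame embeddings
        # video_feat: (B, video_dim) e.g. pooled SlowFast clip features
        fused = torch.cat([clip_feat, video_feat], dim=-1)
        return self.head(fused)


# Toy usage with random tensors standing in for the real encoder outputs.
model = VideoClipFusion()
clip_feat = torch.randn(4, CLIP_DIM)
video_feat = torch.randn(4, SLOWFAST_DIM)
logits = model(clip_feat, video_feat)  # shape (4, NUM_ACTIONS)
```

In practice the two feature streams would come from a frozen CLIP image encoder applied to sampled frames and a SlowFast network applied to the clip; simple concatenation is one common way to exploit complementary features, though the paper may fuse them differently.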