Am I Done? Predicting Action Progress in Videos

Federico Becattini, Tiberio Uricchio, Lorenzo Seidenari, Lamberto Ballan, Alberto Del Bimbo
2020 ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP)  
In this paper we deal with the problem of predicting action progress in videos. We argue that this is an extremely important task since it can be valuable for a wide range of interaction applications. To this end we introduce a novel approach, named ProgressNet, capable of predicting when an action takes place in a video, where it is located within the frames, and how far it has progressed during its execution. To provide a general definition of action progress, we ground our work in the
linguistics literature, borrowing terms and concepts to understand which actions can be the subject of progress estimation. As a result, we define a categorization of actions and their phases. Motivated by the recent success obtained from the interaction of Convolutional and Recurrent Neural Networks, our model is based on a combination of the Faster R-CNN framework, to make frame-wise predictions, and LSTM networks, to estimate action progress through time. After introducing two evaluation protocols for the task at hand, we demonstrate the capability of our model to effectively predict action progress on the UCF-101 and J-HMDB datasets.
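The abstract describes a two-stage design: Faster R-CNN supplies frame-wise spatial predictions, and LSTM networks accumulate them over time to estimate progress. The sketch below is a minimal, illustrative reading of that combination, not the authors' implementation; the class name, feature dimension, and hidden size are assumptions, and the random tensor merely stands in for ROI-pooled Faster R-CNN features.

```python
import torch
import torch.nn as nn

class ProgressNetSketch(nn.Module):
    """Illustrative sketch only: per-frame region features feed an
    LSTM whose hidden state is mapped to a progress value in [0, 1]."""

    def __init__(self, feat_dim=2048, hidden_dim=512):  # sizes are assumptions
        super().__init__()
        # Stand-in for ROI-pooled features from a Faster R-CNN backbone.
        self.fc = nn.Linear(feat_dim, hidden_dim)
        # LSTM integrates per-frame evidence into temporal context.
        self.lstm = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        # Regression head: one progress value per frame.
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, roi_feats):
        # roi_feats: (batch, time, feat_dim) features for a tracked box.
        x = torch.relu(self.fc(roi_feats))
        h, _ = self.lstm(x)
        # Sigmoid squashes each frame's output into [0, 1].
        return torch.sigmoid(self.head(h)).squeeze(-1)  # (batch, time)

# Toy usage: random tensors stand in for real detector features.
feats = torch.randn(2, 16, 2048)        # 2 clips, 16 frames, 2048-d features
progress = ProgressNetSketch()(feats)   # per-frame progress estimates
```

Squashing each frame's output through a sigmoid reflects the natural reading of progress as a fraction in [0, 1] that grows as the action unfolds.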
doi:10.1145/3402447