Jerk-Aware Video Acceleration Magnification

Shoichiro Takeda, Kazuki Okami, Dan Mikami, Megumi Isogai, Hideaki Kimata
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Figure 1: Sports use case: visualizing the impact spread in the iron shaft. The yellow arrow depicts the golf swing trajectory. The top row shows two frames overlaid to indicate the swing phase and the impact phase of the ball. The bottom row shows spatiotemporal slices along the single diagonal red line in the top row of (a); the green and cyan circles in them indicate the swing phase and the impact phase, respectively. (a) Original video. (b) Phase-based motion magnification [25]. (c) Video acceleration magnification [28]. (d) Our proposed jerk-aware video acceleration magnification. Our method magnifies only the subtle deformation of the iron shaft, without the artifacts that quick swinging motions cause in the other methods. See the supplementary material for the video results.

Abstract

Video magnification reveals subtle changes invisible to the naked eye, but such tiny yet meaningful changes are often hidden under large motions: small deformations of the muscles during sports, or tiny vibrations of strings in ukulele playing. To magnify subtle changes under large motions, a video acceleration magnification method has recently been proposed. This method magnifies subtle acceleration changes while ignoring slow large motions. However, quick large motions severely distort its results. In this paper, we present a novel use of jerk to make the acceleration method robust to quick large motions. Jerk has been used to assess the smoothness of time-series data in the neuroscience and mechanical engineering fields. Based on our observation that subtle changes are temporally smoother than quick large motions, we use jerk-based smoothness to design a jerk-aware filter that passes only subtle changes, even under quick large motions. Applying our filter to the acceleration method yields magnification results better than those obtained with state-of-the-art methods.
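The abstract does not give the exact formulation of the jerk-aware filter, but the core idea can be sketched on a 1D temporal signal: jerk is the third temporal derivative, so a large third-order difference flags a quick (non-smooth) motion, and the filter attenuates amplification there while leaving smooth subtle changes amplified. The function names, the normalization, and the `gamma` exponent below are illustrative assumptions, not the paper's definitions (the actual method operates on per-pixel phase signals in a complex steerable pyramid).

```python
import numpy as np

def jerk_aware_weight(signal, gamma=1.0):
    """Illustrative jerk-based smoothness weight in [0, 1].

    Hypothetical formulation: the paper's actual weight is defined
    differently, on phase signals; this only conveys the idea.
    """
    # Third-order temporal difference as a discrete proxy for jerk.
    jerk = np.diff(signal, n=3)
    jerk = np.pad(jerk, (0, 3), mode="edge")  # restore original length
    # Large |jerk| (quick, non-smooth motion) -> weight near 0;
    # small |jerk| (smooth subtle change)     -> weight near 1.
    norm = np.abs(jerk) / (np.abs(jerk).max() + 1e-12)
    return (1.0 - norm) ** gamma

def jerk_aware_acceleration_magnify(signal, alpha=10.0):
    """Amplify the acceleration component, gated by the jerk weight."""
    # Second-order temporal difference approximates acceleration.
    acc = np.diff(signal, n=2)
    acc = np.pad(acc, (0, 2), mode="edge")
    w = jerk_aware_weight(signal)
    # Acceleration is magnified only where motion is temporally smooth,
    # so a quick large motion (high jerk) is not distorted.
    return signal + alpha * w * acc
```

A usage sketch: feeding in a smooth subtle oscillation overlaid with a sharp swing-like transient, the weight drops near the transient, so the amplification concentrates on the smooth portion rather than the quick motion.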
doi:10.1109/cvpr.2018.00190 dblp:conf/cvpr/TakedaOMIK18