Learning Features by Watching Objects Move
2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
This paper presents a novel yet intuitive approach to unsupervised feature learning. Inspired by the human visual system, we explore whether low-level motion-based grouping cues can be used to learn an effective visual representation. Specifically, we use unsupervised motion-based segmentation on videos to obtain segments, which we use as "pseudo ground truth" to train a convolutional network to segment objects from a single frame. Given the extensive evidence that motion plays a key role in …
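The pipeline the abstract describes has two stages: (1) run unsupervised motion segmentation on video to produce per-frame object masks, and (2) use those masks as pseudo ground truth to train a single-frame segmentation ConvNet. The first stage can be illustrated with a minimal NumPy sketch; note this uses simple frame differencing as a stand-in for the paper's actual motion segmentation method, and the function name and threshold are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def motion_pseudo_labels(frames, thresh=0.1):
    """Crude stand-in for motion-based grouping: mark pixels whose
    intensity changes between consecutive frames as 'moving object'
    and the rest as background. (The paper uses a proper motion
    segmentation algorithm; frame differencing is only illustrative.)

    frames: (T, H, W) float array with values in [0, 1].
    Returns (T-1, H, W) binary masks that would serve as pseudo
    ground truth for training a single-frame segmentation ConvNet.
    """
    diffs = np.abs(np.diff(frames, axis=0))   # per-pixel temporal change
    return (diffs > thresh).astype(np.uint8)  # threshold into a binary mask

# Toy example: a bright 2x2 square moving right across a dark background.
T, H, W = 3, 6, 6
frames = np.zeros((T, H, W))
for t in range(T):
    frames[t, 2:4, t:t + 2] = 1.0

masks = motion_pseudo_labels(frames)
print(masks.shape)     # (2, 6, 6)
print(masks[0].sum())  # 4 pixels changed between frames 0 and 1
```

Each (frame, mask) pair would then be fed to a ConvNet trained to predict the mask from the single frame alone, so the network must learn appearance features that correlate with object motion.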
doi:10.1109/cvpr.2017.638
dblp:conf/cvpr/PathakGDDH17
fatcat:teizzuwtkzbhrfiwm2zh4xoqde