A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2019; you can also visit the original URL.
The file type is application/pdf.
Two-Stream Convolutional Networks for Dynamic Texture Synthesis
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
We introduce a two-stream model for dynamic texture synthesis. Our model is based on pre-trained convolutional networks (ConvNets) that target two independent tasks: (i) object recognition, and (ii) optical flow prediction. Given an input dynamic texture, statistics of filter responses from the object recognition ConvNet encapsulate the per-frame appearance of the input texture, while statistics of filter responses from the optical flow ConvNet model its dynamics. To generate a novel texture, a […]
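The abstract describes synthesis by matching statistics of filter responses between a generated input and an exemplar. A common form of such statistics is the Gram matrix of filter responses, optimized by gradient descent. The following toy sketch illustrates that idea only; it is an assumption-laden simplification (a single random linear filter bank stands in for the pre-trained appearance and optical-flow ConvNets, and plain gradient descent stands in for whatever optimizer the paper uses):

```python
import numpy as np

rng = np.random.default_rng(0)

C, D = 8, 64                                   # number of filters, input dimensionality
W = rng.standard_normal((C, D)) / np.sqrt(D)   # fixed toy "filter bank" (stand-in for a ConvNet layer)

def gram(x):
    """Gram statistics of the filter responses to input x, shape (C, C)."""
    f = W @ x                                  # filter responses, shape (C,)
    return np.outer(f, f)

def loss(x, G_target):
    """Squared Frobenius distance between Gram statistics."""
    return np.sum((gram(x) - G_target) ** 2)

target = rng.standard_normal(D)                # stand-in for the exemplar texture
G_target = gram(target)

x = rng.standard_normal(D)                     # noise initialization of the synthesized input
init_loss = loss(x, G_target)

lr = 0.01
for _ in range(2000):
    f = W @ x
    delta = np.outer(f, f) - G_target          # mismatch in Gram statistics
    grad_x = W.T @ (4.0 * delta @ f)           # gradient of ||G - G_target||_F^2 w.r.t. x
    x -= lr * grad_x

final_loss = loss(x, G_target)
```

After optimization, `final_loss` is far below `init_loss`: the input has been driven to reproduce the exemplar's response statistics, which is the core mechanism behind this family of texture-synthesis methods.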
doi:10.1109/cvpr.2018.00701
dblp:conf/cvpr/TesfaldetBD18
fatcat:xltjab4otvhpzf3hxf7fnxsozq