Diversified Texture Synthesis with Feed-Forward Networks
2017
2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Recent progress on deep discriminative and generative modeling has shown promising results for texture synthesis. However, existing feed-forward methods trade generality for efficiency and suffer from several issues: a shortage of generality (one network must be built per texture), a lack of diversity (the same network always produces visually identical outputs), and suboptimality (the generated visual effects are less satisfying). In this work, we focus on solving these issues for improved texture synthesis.
doi:10.1109/cvpr.2017.36
dblp:conf/cvpr/LiFYWLY17
fatcat:ax4veoxfk5fvfkfozy5545yxl4