A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2019; you can also visit the original URL: http://openaccess.thecvf.com:80/content_CVPR_2019/papers/Chen_Self-Supervised_GANs_via_Auxiliary_Rotation_Loss_CVPR_2019_paper.pdf (application/pdf).
Self-Supervised GANs via Auxiliary Rotation Loss
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Venue record: https://fatcat.wiki/container/ilwxppn4d5hizekyd3ndvy2mii
Conditional GANs are at the forefront of natural image synthesis. The main drawback of such models is the necessity for labeled data. In this work we exploit two popular unsupervised learning techniques, adversarial training and self-supervision, and take a step towards bridging the gap between conditional and unconditional GANs. In particular, we allow the networks to collaborate on the task of representation learning, while being adversarial with respect to the classic GAN game. The role of self-supervision is to encourage the discriminator to learn meaningful feature representations which are not forgotten during training. We test empirically both the quality of the learned image representations, and the quality of the synthesized images. Under the same conditions, the self-supervised GAN attains a similar performance to state-of-the-art conditional counterparts. Finally, we show that this approach to fully unsupervised learning can be scaled to attain an FID of 23.4 on unconditional IMAGENET generation.

* Work done at Google. Code at https://github.com/google/compare_gan.

doi:10.1109/cvpr.2019.01243 (https://doi.org/10.1109/cvpr.2019.01243)
dblp:conf/cvpr/ChenZRLH19 (https://dblp.org/rec/conf/cvpr/ChenZRLH19.html)
fatcat:kajs42ghnfg4xkfjhz7hlchiem (https://fatcat.wiki/release/kajs42ghnfg4xkfjhz7hlchiem)
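The mechanism described in the abstract can be made concrete with a short sketch. The code below is an illustrative PyTorch reconstruction, not the authors' compare_gan (TensorFlow) implementation: the tiny discriminator, the head names, and the loss weights alpha and beta are assumptions. Only the core idea follows the paper, namely a 4-way auxiliary head on the discriminator that predicts which of the rotations {0°, 90°, 180°, 270°} was applied, trained on rotated real images for the discriminator and on rotated generated images for the generator.

```python
# Illustrative sketch of an auxiliary rotation loss for a GAN discriminator.
# Hypothetical module and argument names; the weights alpha/beta are
# placeholders, not the paper's tuned values.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Discriminator(nn.Module):
    """GAN discriminator with an extra 4-way rotation-prediction head."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.gan_head = nn.Linear(128, 1)   # real/fake logit
        self.rot_head = nn.Linear(128, 4)   # logits over the 4 rotations

    def forward(self, x):
        h = self.features(x)
        return self.gan_head(h), self.rot_head(h)

def rotate_batch(x):
    """Stack all four 90-degree rotations of x; labels are 0..3."""
    rotated = torch.cat([torch.rot90(x, k, dims=(2, 3)) for k in range(4)])
    labels = torch.arange(4, device=x.device).repeat_interleave(len(x))
    return rotated, labels

def d_loss(D, real, fake, beta=1.0):
    # Non-saturating GAN loss plus rotation loss on rotated *real* images,
    # which is what keeps the discriminator's features from being forgotten.
    real_logit, _ = D(real)
    fake_logit, _ = D(fake.detach())
    gan = F.softplus(-real_logit).mean() + F.softplus(fake_logit).mean()
    rot_x, rot_y = rotate_batch(real)
    _, rot_logits = D(rot_x)
    return gan + beta * F.cross_entropy(rot_logits, rot_y)

def g_loss(D, fake, alpha=0.2):
    # The generator both fools the GAN head and is rewarded when the
    # rotations of its own samples are detectable by the discriminator.
    fake_logit, _ = D(fake)
    gan = F.softplus(-fake_logit).mean()
    rot_x, rot_y = rotate_batch(fake)
    _, rot_logits = D(rot_x)
    return gan + alpha * F.cross_entropy(rot_logits, rot_y)
```

In this setup the two networks are adversarial on the real/fake game but collaborate on the rotation task: both losses decrease when the rotation head classifies correctly, matching the abstract's framing of self-supervision as a shared representation-learning objective.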
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20190819011006/http://openaccess.thecvf.com:80/content_CVPR_2019/papers/Chen_Self-Supervised_GANs_via_Auxiliary_Rotation_Loss_CVPR_2019_paper.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext">
<button class="ui simple right pointing dropdown compact black labeled icon button serp-button">
<i class="icon ia-icon"></i>
Web Archive
[PDF]
<div class="menu fulltext-thumbnail">
<img src="https://blobs.fatcat.wiki/thumbnail/pdf/0e/96/0e96b0a586e3acfcf1602c0e246c04c6080e2315.180px.jpg" alt="fulltext thumbnail" loading="lazy">
</div>
</button>
</a>
<a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/cvpr.2019.01243">
<button class="ui left aligned compact blue labeled icon button serp-button">
<i class="external alternate icon"></i>
ieee.com
</button>
</a>