A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2021; you can also visit the original URL (https://arxiv.org/pdf/2103.14017v2.pdf). The file type is application/pdf.
Scaling-up Disentanglement for Image Translation
[article]
<span title="2021-09-08">2021</span>
<i >
arXiv
</i>
<span class="release-stage" >pre-print</span>
Image translation methods typically aim to manipulate a set of labeled attributes (given as supervision at training time, e.g. a domain label) while leaving the unlabeled attributes intact. Current methods achieve either (i) disentanglement, which exhibits low visual fidelity and can only be satisfied when the attributes are perfectly uncorrelated, or (ii) visually plausible translations, which are clearly not disentangled. In this work, we propose OverLORD, a single framework for disentangling labeled and unlabeled attributes as well as synthesizing high-fidelity images, which is composed of two stages: (i) Disentanglement: learning disentangled representations with latent optimization. Differently from previous approaches, we do not rely on adversarial training or any architectural biases. (ii) Synthesis: training feed-forward encoders for inferring the learned attributes and tuning the generator in an adversarial manner to increase the perceptual quality. When the labeled and unlabeled attributes are correlated, we model an additional representation that accounts for the correlated attributes and improves disentanglement. We highlight that our flexible framework covers multiple settings, such as disentangling labeled attributes, pose and appearance, localized concepts, and shape and texture. We present significantly better disentanglement with higher translation quality and greater output diversity than state-of-the-art methods.

arXiv:2103.14017v2 (https://arxiv.org/abs/2103.14017v2)
fatcat:r3sinwhijbadvamkpvq7rndgay (https://fatcat.wiki/release/r3sinwhijbadvamkpvq7rndgay)
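The two-stage recipe described in the abstract lends itself to a short illustration. The PyTorch sketch below is a toy rendering of stage (i), not the paper's implementation: per-image "unlabeled" codes and per-class "labeled" codes are free parameters optimized jointly with a generator, using only a reconstruction loss and a simple information bottleneck, with no adversarial training. The decoder architecture, code dimensions, noise level, and regularization weight are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of disentanglement via latent optimization:
# per-image content codes and per-class codes are learnable parameters trained
# jointly with a toy generator using reconstruction only.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentOptimizationModel(nn.Module):
    def __init__(self, n_images, n_classes, class_dim=128, content_dim=64, img_size=64):
        super().__init__()
        # One learnable code per training image (unlabeled attributes).
        self.content = nn.Embedding(n_images, content_dim)
        nn.init.normal_(self.content.weight, std=0.01)
        # One learnable code per labeled attribute value (e.g. a domain label).
        self.klass = nn.Embedding(n_classes, class_dim)
        # Toy generator: maps the concatenated codes to an RGB image.
        self.decoder = nn.Sequential(
            nn.Linear(content_dim + class_dim, 256), nn.ReLU(),
            nn.Linear(256, 3 * img_size * img_size), nn.Sigmoid(),
        )
        self.img_size = img_size

    def forward(self, img_ids, class_ids, noise_std=0.1):
        c = self.content(img_ids)
        # Gaussian noise on the per-image codes acts as an information
        # bottleneck, pushing label information into the class codes.
        c_noisy = c + noise_std * torch.randn_like(c) if self.training else c
        y = self.klass(class_ids)
        x = self.decoder(torch.cat([c_noisy, y], dim=1))
        return x.view(-1, 3, self.img_size, self.img_size), c

def training_step(model, optimizer, images, img_ids, class_ids, reg_weight=1e-3):
    optimizer.zero_grad()
    recon, content = model(img_ids, class_ids)
    # Reconstruction drives both the codes and the generator; an L2 penalty
    # keeps the unlabeled codes small (the other half of the bottleneck).
    loss = F.mse_loss(recon, images) + reg_weight * content.pow(2).mean()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a second stage, one would train feed-forward encoders to regress these learned codes from images and fine-tune the generator adversarially for perceptual quality; that part is omitted from the sketch.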
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210914155541/https://arxiv.org/pdf/2103.14017v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext">
<button class="ui simple right pointing dropdown compact black labeled icon button serp-button">
<i class="icon ia-icon"></i>
Web Archive
[PDF]
<div class="menu fulltext-thumbnail">
<img src="https://blobs.fatcat.wiki/thumbnail/pdf/9d/79/9d795f3f39b77beda6081dd12fc2fdf8b02f4cbc.180px.jpg" alt="fulltext thumbnail" loading="lazy">
</div>
</button>
</a>
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2103.14017v2" title="arxiv.org access">
<button class="ui compact blue labeled icon button serp-button">
<i class="file alternate outline icon"></i>
arxiv.org
</button>
</a>