A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2021; you can also visit <a rel="external noopener" href="https://arxiv.org/pdf/1906.06972v2.pdf">the original URL</a>. The file type is <code>application/pdf</code>.
EnlightenGAN: Deep Light Enhancement without Paired Supervision
[article]
<span title="2021-01-24">2021</span>
<i>arXiv</i>
<span class="release-stage">pre-print</span>
Deep learning-based methods have achieved remarkable success in image restoration and enhancement, but are they still competitive when paired training data is lacking? As one such example, this paper explores the low-light image enhancement problem, where in practice it is extremely challenging to simultaneously take a low-light and a normal-light photo of the same visual scene. We propose a highly effective unsupervised generative adversarial network, dubbed EnlightenGAN, that can be trained without low/normal-light image pairs, yet generalizes very well to various real-world test images. Instead of supervising the learning with ground-truth data, we propose to regularize the unpaired training using information extracted from the input itself, and benchmark a series of innovations for the low-light image enhancement problem, including a global-local discriminator structure, a self-regularized perceptual loss fusion, and an attention mechanism. Extensive experiments show that our proposed approach outperforms recent methods on a variety of metrics, both in visual quality and in a subjective user study. Thanks to the great flexibility brought by unpaired training, EnlightenGAN is easily adaptable to enhancing real-world images from various domains. The code is available at
<span class="external-identifiers">
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1906.06972v2">arXiv:1906.06972v2</a>
<a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/sq3oxeq6pzdjdjmlamszanym54">fatcat:sq3oxeq6pzdjdjmlamszanym54</a>
</span>
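<p>The "information extracted from the input itself" that regularizes the unpaired training includes a self-regularized attention map built from the input's own illumination: darker regions receive higher attention so the generator enhances them more aggressively. A minimal NumPy sketch of that idea follows; the choice of the per-pixel channel maximum as the illumination proxy and the min-max normalization are assumptions of this sketch, not details confirmed by the excerpt above.</p>
<pre><code class="language-python">import numpy as np

def self_regularized_attention(rgb: np.ndarray) -> np.ndarray:
    """Attention map for a low-light RGB image of shape (H, W, 3).

    Dark pixels map to attention near 1 (enhance strongly), bright pixels
    to attention near 0 (leave mostly alone). No ground truth is needed:
    the map is derived entirely from the input image itself.
    """
    illum = rgb.astype(np.float64).max(axis=2)        # per-pixel illumination proxy
    lo, hi = illum.min(), illum.max()
    illum_norm = (illum - lo) / (hi - lo + 1e-8)      # normalize to [0, 1]
    return 1.0 - illum_norm                           # invert: dark -> high attention

# Usage: a toy 2x2 image with one bright pixel, the rest dark
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = [255, 255, 255]                           # bright pixel
att = self_regularized_attention(img)                 # att[0, 0] ~ 0, att[1, 1] ~ 1
</code></pre>
<p>In the full model such a map would multiply intermediate feature maps of the generator; here it only illustrates how a supervision signal can be derived from the unpaired input alone.</p>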
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210130010107/https://arxiv.org/pdf/1906.06972v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext">
<button class="ui simple right pointing dropdown compact black labeled icon button serp-button">
<i class="icon ia-icon"></i>
Web Archive
[PDF]
<div class="menu fulltext-thumbnail">
<img src="https://blobs.fatcat.wiki/thumbnail/pdf/2e/2a/2e2a89a20d062b138ffb39c4a47a15f517f83496.180px.jpg" alt="fulltext thumbnail" loading="lazy">
</div>
</button>
</a>
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1906.06972v2" title="arxiv.org access">
<button class="ui compact blue labeled icon button serp-button">
<i class="file alternate outline icon"></i>
arxiv.org
</button>
</a>