A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020; you can also visit <a rel="external noopener" href="https://arxiv.org/pdf/2009.06678v1.pdf">the original URL</a>. The file type is <code>application/pdf</code>.
WDRN : A Wavelet Decomposed RelightNet for Image Relighting
[article]
<span title="2020-09-14">2020</span>
<i >
arXiv
</i>
<span class="release-stage" >pre-print</span>
The task of recalibrating the illumination settings in an image to a target configuration is known as relighting. Relighting techniques have potential applications in digital photography, the gaming industry and augmented reality. In this paper, we address the one-to-one relighting problem, where an image at a target illumination setting is predicted given an input image with specific illumination conditions. To this end, we propose a wavelet decomposed RelightNet called WDRN, which is a novel encoder-decoder network employing wavelet based decomposition followed by convolution layers under a multi-resolution framework. We also propose a novel loss function called gray loss that ensures efficient learning of the gradient in illumination along different directions of the ground truth image, giving rise to visually superior relit images. The proposed solution won first position in the relighting challenge event at the Advances in Image Manipulation (AIM) 2020 workshop, which proves its effectiveness as measured by a Mean Perceptual Score, computed from SSIM and a Learned Perceptual Image Patch Similarity score.
<span class="external-identifiers">
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2009.06678v1">arXiv:2009.06678v1</a>
<a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/ete7747wrfdihapemjwonkslsm">fatcat:ete7747wrfdihapemjwonkslsm</a>
</span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20201004085002/https://arxiv.org/pdf/2009.06678v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext">
<button class="ui simple right pointing dropdown compact black labeled icon button serp-button">
<i class="icon ia-icon"></i>
Web Archive
[PDF]
</button>
</a>
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2009.06678v1" title="arxiv.org access">
<button class="ui compact blue labeled icon button serp-button">
<i class="file alternate outline icon"></i>
arxiv.org
</button>
</a>
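The wavelet based decomposition that the WDRN abstract describes can be illustrated with a single-level 2D Haar transform, which splits an image into one low-frequency and three detail subbands at half resolution. This is only a generic sketch of the decomposition step; the actual WDRN filters, network layers, and gray loss are not given in this snippet and are not reproduced here.

```python
import numpy as np

def haar_decompose(img):
    """Single-level 2D Haar wavelet decomposition.

    Splits an H x W image (H, W even) into four half-resolution
    subbands: approximation (LL) and horizontal, vertical, and
    diagonal details (LH, HL, HH). In a WDRN-style network, each
    subband would then be processed by convolution layers.
    """
    a = img[0::2, 0::2]   # top-left pixel of each 2x2 block
    b = img[0::2, 1::2]   # top-right
    c = img[1::2, 0::2]   # bottom-left
    d = img[1::2, 1::2]   # bottom-right
    ll = (a + b + c + d) / 4.0
    lh = (a - b + c - d) / 4.0
    hl = (a + b - c - d) / 4.0
    hh = (a - b - c + d) / 4.0
    return ll, lh, hl, hh

# Example: decompose a 4x4 linear ramp.
img = np.arange(16, dtype=float).reshape(4, 4)
ll, lh, hl, hh = haar_decompose(img)
```

Because the input is a linear ramp, the diagonal detail subband (HH) comes out as all zeros, while LL holds the 2x2 block averages; an inverse transform (not shown) would reconstruct the image exactly from the four subbands.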
Transform Domain Pyramidal Dilated Convolution Networks For Restoration of Under Display Camera Images
[article]
<span title="2020-09-20">2020</span>
<i >
arXiv
</i>
<span class="release-stage" >pre-print</span>
Under-display camera (UDC) is a novel technology that can make the digital imaging experience in handheld devices seamless by providing a large screen-to-body ratio. UDC images are severely degraded owing to their positioning under a display screen. This work addresses the restoration of images degraded as a result of UDC imaging. Two different networks are proposed for restoring images taken with two types of UDC technologies. The first method uses a pyramidal dilated convolution within a wavelet decomposed convolutional neural network for pentile-organic LED (P-OLED) based display systems. The second method employs pyramidal dilated convolution within a discrete cosine transform based dual domain network to restore images taken using a transparent-organic LED (T-OLED) based UDC system. The first method produced very good quality restored images and was the winning entry in the European Conference on Computer Vision (ECCV) 2020 challenge on image restoration for under-display camera, Track 2 (P-OLED), evaluated on PSNR and SSIM. The second method took fourth position in Track 1 (T-OLED) of the same challenge, evaluated on the same metrics.
<span class="external-identifiers">
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2009.09393v1">arXiv:2009.09393v1</a>
<a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/agwxfj7rgjdjvexb6fbxamonji">fatcat:agwxfj7rgjdjvexb6fbxamonji</a>
</span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200928012222/https://arxiv.org/pdf/2009.09393v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext">
<button class="ui simple right pointing dropdown compact black labeled icon button serp-button">
<i class="icon ia-icon"></i>
Web Archive
[PDF]
</button>
</a>
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2009.09393v1" title="arxiv.org access">
<button class="ui compact blue labeled icon button serp-button">
<i class="file alternate outline icon"></i>
arxiv.org
</button>
</a>
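Pyramidal dilated convolution, as named in the abstract above, runs parallel convolutions whose dilation rates grow so the receptive field covers multiple scales without extra parameters. The sketch below is a minimal single-channel NumPy illustration of that idea; the kernel sizes, dilation rates, and surrounding network are assumptions for illustration, not the paper's architecture.

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation):
    """3x3 dilated convolution: stride 1, zero padding, single channel.

    A dilation of d samples the input on a grid spaced d pixels apart,
    enlarging the receptive field to (2*d + 1) per side.
    """
    k = kernel.shape[0]
    pad = dilation * (k // 2)          # keep output the same size as input
    xp = np.pad(x, pad)
    out = np.zeros_like(x, dtype=float)
    for i in range(k):
        for j in range(k):
            out += kernel[i, j] * xp[i * dilation : i * dilation + x.shape[0],
                                     j * dilation : j * dilation + x.shape[1]]
    return out

def pyramidal_dilated_block(x, kernels, rates=(1, 2, 4)):
    """Apply parallel dilated convolutions at several rates and stack
    the results, mimicking the multi-scale branches of a pyramid."""
    return np.stack([dilated_conv2d(x, k, r) for k, r in zip(kernels, rates)])

# Example: a 5x5 ramp through three dilation rates with an identity tap,
# which lets us check that the branch outputs stay spatially aligned.
x = np.arange(25, dtype=float).reshape(5, 5)
k = np.zeros((3, 3)); k[1, 1] = 1.0
features = pyramidal_dilated_block(x, [k, k, k])
```

In a real network the stacked branch outputs would typically be concatenated along the channel axis and fused by a further convolution; the identity-kernel example simply verifies that each dilation rate preserves spatial alignment.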
AIM 2020 Challenge on Video Extreme Super-Resolution: Methods and Results
[article]
<span title="2020-09-14">2020</span>
<i >
arXiv
</i>
<span class="release-stage" >pre-print</span>
CET CVLab Title: Video Extreme Super-Resolution using Progressive Wide Activation Net Members: Hrishikesh P S, Densen Puthussery, Jiji C V Affiliations: College of Engineering, Trivandrum, India Fig. 1 ...
<span class="external-identifiers">
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2009.06290v1">arXiv:2009.06290v1</a>
<a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/bbgfzmwupfgcnigwr2onun4zzm">fatcat:bbgfzmwupfgcnigwr2onun4zzm</a>
</span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200920185737/https://arxiv.org/pdf/2009.06290v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext">
<button class="ui simple right pointing dropdown compact black labeled icon button serp-button">
<i class="icon ia-icon"></i>
Web Archive
[PDF]
</button>
</a>
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2009.06290v1" title="arxiv.org access">
<button class="ui compact blue labeled icon button serp-button">
<i class="file alternate outline icon"></i>
arxiv.org
</button>
</a>
AIM 2020 Challenge on Rendering Realistic Bokeh
[article]
<span title="2020-11-10">2020</span>
<i >
arXiv
</i>
<span class="release-stage" >pre-print</span>
Puthussery -puthusserydensen@gmail.com, Jiji C V Affiliations: College of Engineering Trivandrum, India CET SP Title: Bokeh Effect using VGG based Wavelet CNN Members: Hrishikesh P S -hrishikeshps@cet.ac.in ...
Team | Member | Platform | GPU | Runtime (s) | PSNR↑ | SSIM↑ | MOS↑
Airia-bokeh | MingQian | TensorFlow | Nvidia TITAN RTX | 1.52 | 23.58 | 0.8770 | 4.2
AIA-Smart | JuewenPeng | PyTorch | GeForce GTX 1080 | 15.2 | 22.94 | 0.8842 | 4.0
CET CVLab | Densen ...
<span class="external-identifiers">
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2011.04988v1">arXiv:2011.04988v1</a>
<a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/taj54kntdbbmfpekn7cz6eqcqe">fatcat:taj54kntdbbmfpekn7cz6eqcqe</a>
</span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20201112024448/https://arxiv.org/pdf/2011.04988v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext">
<button class="ui simple right pointing dropdown compact black labeled icon button serp-button">
<i class="icon ia-icon"></i>
Web Archive
[PDF]
</button>
</a>
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2011.04988v1" title="arxiv.org access">
<button class="ui compact blue labeled icon button serp-button">
<i class="file alternate outline icon"></i>
arxiv.org
</button>
</a>
AIM 2020: Scene Relighting and Illumination Estimation Challenge
[article]
<span title="2020-09-27">2020</span>
<i >
arXiv
</i>
<span class="release-stage" >pre-print</span>
The loss is computed based on the angle and color temperature predictions, following Eq. (2), and is used to determine the final ranking. 0.3405 (1), 17.0717 (2), 0.03 s, Tensorflow, P100
CET CVLab Densen ...
<span class="external-identifiers">
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2009.12798v1">arXiv:2009.12798v1</a>
<a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/lkc46rz3trak5d277vzern2saq">fatcat:lkc46rz3trak5d277vzern2saq</a>
</span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20201005123216/https://arxiv.org/pdf/2009.12798v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext">
<button class="ui simple right pointing dropdown compact black labeled icon button serp-button">
<i class="icon ia-icon"></i>
Web Archive
[PDF]
</button>
</a>
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2009.12798v1" title="arxiv.org access">
<button class="ui compact blue labeled icon button serp-button">
<i class="file alternate outline icon"></i>
arxiv.org
</button>
</a>
NTIRE 2020 Challenge on Image Demoireing: Methods and Results
[article]
<span title="2020-05-06">2020</span>
<i >
arXiv
</i>
<span class="release-stage" >pre-print</span>
This paper reviews the Challenge on Image Demoireing that was part of the New Trends in Image Restoration and Enhancement (NTIRE) workshop, held in conjunction with CVPR 2020. Demoireing is the difficult task of removing moire patterns from an image to reveal the underlying clean image. The challenge was divided into two tracks. Track 1 targeted the single image demoireing problem, which seeks to remove moire patterns from a single image. Track 2 focused on the burst demoireing problem, where a set of degraded moire images of the same scene was provided as input, with the goal of producing a single demoired image as output. The methods were ranked in terms of their fidelity, measured using the peak signal-to-noise ratio (PSNR) between the ground truth clean images and the restored images produced by the participants' methods. The tracks had 142 and 99 registered participants, respectively, with a total of 14 and 6 submissions in the final testing stage. The entries span the current state of the art in image and burst image demoireing.
<span class="external-identifiers">
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2005.03155v1">arXiv:2005.03155v1</a>
<a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/7cob4jufzbawfafcimgtoitdk4">fatcat:7cob4jufzbawfafcimgtoitdk4</a>
</span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200509004024/https://arxiv.org/pdf/2005.03155v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext">
<button class="ui simple right pointing dropdown compact black labeled icon button serp-button">
<i class="icon ia-icon"></i>
Web Archive
[PDF]
</button>
</a>
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2005.03155v1" title="arxiv.org access">
<button class="ui compact blue labeled icon button serp-button">
<i class="file alternate outline icon"></i>
arxiv.org
</button>
</a>
UDC 2020 Challenge on Image Restoration of Under-Display Camera: Methods and Results
[article]
<span title="2020-08-18">2020</span>
<i >
arXiv
</i>
<span class="release-stage" >pre-print</span>
CET CVLab Members: Densen Puthussery, Hrishikesh P S, Melvin Kuriakose, Jiji C V Affiliations: College of Engineering, Trivandrum, India Track: T-OLED and P-OLED Title: Dual Domain Net (T-OLED) and Wavelet ...
IT: Inference Time
Team | Username | PSNR | SSIM | TT (h) | IT (s/frame) | CPU/GPU | Platform | Ensemble | Loss
CET CVLAB | Densen | 32.99 (1) | 0.9578 (1) | 72 | 0.044 | Tesla T4 | Tensorflow | - | L2
CILab IITM | varun19299 | 32.29 ...
<span class="external-identifiers">
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2008.07742v1">arXiv:2008.07742v1</a>
<a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/5ru22zb5e5fcnnl4y3jpmclsyy">fatcat:5ru22zb5e5fcnnl4y3jpmclsyy</a>
</span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200821164518/https://arxiv.org/pdf/2008.07742v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext">
<button class="ui simple right pointing dropdown compact black labeled icon button serp-button">
<i class="icon ia-icon"></i>
Web Archive
[PDF]
</button>
</a>
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2008.07742v1" title="arxiv.org access">
<button class="ui compact blue labeled icon button serp-button">
<i class="file alternate outline icon"></i>
arxiv.org
</button>
</a>
NTIRE 2020 Challenge on Perceptual Extreme Super-Resolution: Methods and Results
[article]
<span title="2020-05-03">2020</span>
<i >
arXiv
</i>
<span class="release-stage" >pre-print</span>
Puthussery, Jiji C V Affiliation: College of Engineering Trivandrum
MSMers, MsSrModel, MoonCloud, SuperT, KU ISPL A ...
Jaihyun Park, Gwantae Kim, Kanghyu Lee Affiliation: Korea University CET CVLab Title: Perceptual Extreme Super resolution Using V-Stacked Relativistic GAN Members: Hrishikesh P S (hrishikeshps@cet.ac.in), Densen ...
<span class="external-identifiers">
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2005.01056v1">arXiv:2005.01056v1</a>
<a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/6nwj5ilbgbgjnmd6oy435hjdhi">fatcat:6nwj5ilbgbgjnmd6oy435hjdhi</a>
</span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200513082158/https://arxiv.org/pdf/2005.01056v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext">
<button class="ui simple right pointing dropdown compact black labeled icon button serp-button">
<i class="icon ia-icon"></i>
Web Archive
[PDF]
</button>
</a>
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2005.01056v1" title="arxiv.org access">
<button class="ui compact blue labeled icon button serp-button">
<i class="file alternate outline icon"></i>
arxiv.org
</button>
</a>
AIM 2020 Challenge on Efficient Super-Resolution: Methods and Results
[article]
<span title="2020-09-15">2020</span>
<i >
arXiv
</i>
<span class="release-stage" >pre-print</span>
., Ltd CET CVLab Title: Efficient Single Image Super-resolution using Progressive Wide Activation Net Members: Hrishikesh P S (hrishikeshps94@gmail.com), Densen Puthussery, Jiji C V Affiliation: College ...
<span class="external-identifiers">
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2009.06943v1">arXiv:2009.06943v1</a>
<a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/2s7k5wsgsjgo5flnqaby26cn64">fatcat:2s7k5wsgsjgo5flnqaby26cn64</a>
</span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200929192316/https://arxiv.org/pdf/2009.06943v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext">
<button class="ui simple right pointing dropdown compact black labeled icon button serp-button">
<i class="icon ia-icon"></i>
Web Archive
[PDF]
</button>
</a>
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2009.06943v1" title="arxiv.org access">
<button class="ui compact blue labeled icon button serp-button">
<i class="file alternate outline icon"></i>
arxiv.org
</button>
</a>
NTIRE 2020 Challenge on Image and Video Deblurring
[article]
<span title="2020-05-10">2020</span>
<i >
arXiv
</i>
<span class="release-stage" >pre-print</span>
Simplified SRN Members: Yuichi Ito (wataridori2010@gmail.com) Affiliations: Vermilion CET CVLab Title: V-Stacked Deep CNN for Single Image Deblurring Members: Hrishikesh P S (hrishikeshps@cet.ac.in), Densen ...
Puthussery, Akhil K A, Jiji C V Affiliations: College of Engineering Trivandrum Title: Image Deblurring using Wasserstein Autoencoder Members: Guisik Kim (specialre@naver.com) Affiliations: CVML, Chung-Ang ...
<span class="external-identifiers">
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2005.01244v2">arXiv:2005.01244v2</a>
<a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/aoy3tyxlybefrd7yd5ywvr6jh4">fatcat:aoy3tyxlybefrd7yd5ywvr6jh4</a>
</span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200529002740/https://arxiv.org/pdf/2005.01244v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext">
<button class="ui simple right pointing dropdown compact black labeled icon button serp-button">
<i class="icon ia-icon"></i>
Web Archive
[PDF]
</button>
</a>
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2005.01244v2" title="arxiv.org access">
<button class="ui compact blue labeled icon button serp-button">
<i class="file alternate outline icon"></i>
arxiv.org
</button>
</a>