
AIM 2020 Challenge on Video Temporal Super-Resolution [article]

Sanghyun Son, Jaerin Lee, Seungjun Nah, Radu Timofte, Kyoung Mu Lee
2020 arXiv   pre-print
Videos in the real world contain various dynamics and motions that may look unnaturally discontinuous in time when the recorded frame rate is low. This paper reports on the second AIM challenge on Video Temporal Super-Resolution (VTSR), a.k.a. frame interpolation, with a focus on the proposed solutions, results, and analysis. From low-frame-rate (15 fps) videos, the challenge participants are required to submit higher-frame-rate (30 and 60 fps) sequences by estimating temporally intermediate frames. To simulate realistic and challenging dynamics of the real world, we employ the REDS_VTSR dataset, derived from diverse videos captured with a hand-held camera, for training and evaluation purposes. There were 68 registered participants in the competition, and 5 teams (one withdrawn) competed in the final testing phase. The winning team proposes the enhanced quadratic video interpolation method and achieves state-of-the-art performance on the VTSR task.
arXiv:2009.12987v1 fatcat:23cngd3i25bupb5lwszpzxthte

Dynamic Scene Deblurring using a Locally Adaptive Linear Blur Model [article]

Tae Hyun Kim, Seungjun Nah, Kyoung Mu Lee
2016 arXiv   pre-print
Seungjun Nah received the BS degree in Electrical and Computer Engineering from Seoul National University (SNU), Seoul, Korea in 2014.  ... 
arXiv:1603.04265v1 fatcat:5l5lmfezi5e35ldyhdyrr3aufm

Enhanced Deep Residual Networks for Single Image Super-Resolution [article]

Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, Kyoung Mu Lee
2017 arXiv   pre-print
We remove the batch normalization layers from our network as Nah et al. [19] presented in their image deblurring work.  ... 
arXiv:1707.02921v1 fatcat:64liy73w75fwfex5t24b5uwslm
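
The batch-normalization removal mentioned in the snippet above is a design choice that is easy to illustrate. Below is a minimal, hypothetical PyTorch sketch of an EDSR-style residual block with no BatchNorm layers; the module name, channel count, and residual scaling factor are illustrative assumptions, not taken from the authors' code.

    import torch.nn as nn

    class ResBlockNoBN(nn.Module):
        """EDSR-style residual block: conv -> ReLU -> conv, with no BatchNorm.

        Illustrative sketch only; channel count and res_scale are assumed values.
        """
        def __init__(self, channels: int = 64, res_scale: float = 0.1):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            )
            # Scale the residual branch before the skip addition (a common
            # stabilization trick in large BN-free super-resolution networks).
            self.res_scale = res_scale

        def forward(self, x):
            return x + self.res_scale * self.body(x)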

AIM 2019 Challenge on Video Temporal Super-Resolution: Methods and Results [article]

Seungjun Nah, Sanghyun Son, Radu Timofte, Kyoung Mu Lee
2020 arXiv   pre-print
Nah (seungjun.nah@gmail.com, Seoul National University), S. Son, R. Timofte, K. M. Lee are the AIM 2019 challenge organizers, while the other authors participated in the challenge.  ... 
arXiv:2005.01233v1 fatcat:aqxpmzat7jcwdiztapx5sttceq

Clustering Convolutional Kernels to Compress Deep Neural Networks [chapter]

Sanghyun Son, Seungjun Nah, Kyoung Mu Lee
2018 Lecture Notes in Computer Science  
In this paper, we propose a novel method to compress CNNs by reconstructing the network from a small set of spatial convolution kernels. Starting from a pre-trained model, we extract representative 2D kernel centroids using k-means clustering. Each centroid replaces the corresponding kernels of the same cluster, and we use indexed representations instead of saving whole kernels. Kernels in the same cluster share their weights, and we fine-tune the model while keeping the compressed state. Furthermore, we also suggest an efficient way of removing redundant calculations in the compressed convolutional layers. We experimentally show that our technique works well without harming the accuracy of widely-used CNNs. Also, our compressed ResNet-18 even outperforms its uncompressed counterpart on the ILSVRC2012 classification task at an over 10x compression ratio.
doi:10.1007/978-3-030-01237-3_14 fatcat:7ky4namhfrerffnwfurl357wqa
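
The abstract above outlines the compression pipeline clearly enough to sketch. The following is a simplified, hypothetical Python sketch of the core step: gather every 3x3 spatial kernel of a pre-trained model, cluster them with k-means, and keep only centroid indices. The function name, the use of scikit-learn, and the centroid count are assumptions for illustration; the paper's actual method (e.g., any kernel normalization before clustering) may differ.

    import numpy as np
    import torch.nn as nn
    from sklearn.cluster import KMeans

    def cluster_conv_kernels(model: nn.Module, num_centroids: int = 256):
        """Cluster all 3x3 spatial kernels of a CNN and return (centroids, labels, shapes).

        Each 2D kernel is flattened to a 9-dim vector; after k-means, a layer can be
        stored as a table of centroid indices instead of full kernel weights.
        Simplified sketch of the idea described in the abstract, not the authors' code.
        """
        flat_kernels, shapes = [], []
        for m in model.modules():
            if isinstance(m, nn.Conv2d) and m.kernel_size == (3, 3):
                w = m.weight.detach().cpu().numpy()      # (out_ch, in_ch, 3, 3)
                flat_kernels.append(w.reshape(-1, 9))    # one row per 2D kernel
                shapes.append(w.shape)
        flat = np.concatenate(flat_kernels, axis=0)
        km = KMeans(n_clusters=num_centroids, n_init=4).fit(flat)
        centroids = km.cluster_centers_.reshape(num_centroids, 3, 3)
        labels = km.labels_                              # indexed representation per kernel
        return centroids, labels, shapes

A fine-tuning pass would then update the shared centroids (kernels in the same cluster share weights) while keeping the index assignment fixed, as the abstract describes.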

NTIRE 2020 Challenge on Image and Video Deblurring [article]

Seungjun Nah, Sanghyun Son, Radu Timofte, Kyoung Mu Lee
2020 arXiv   pre-print
Title: NTIRE 2020 Challenge on Image and Video Deblurring Members: Seungjun Nah 1 (seungjun.nah@gmail.com), Sanghyun Son 1 , Radu Timofte 2 , Kyoung Mu Lee 1 Affiliations: 1 Department of ECE, ASRI, SNU,  ...  For Track 1, we present the result of Nah et al. [50] that is trained with the REDS dataset. L1 loss is used to train the model for 200 epochs with batch size 16.  ... 
arXiv:2005.01244v2 fatcat:aoy3tyxlybefrd7yd5ywvr6jh4

Deep Multi-scale Convolutional Neural Network for Dynamic Scene Deblurring [article]

Seungjun Nah, Tae Hyun Kim, Kyoung Mu Lee
2018 arXiv   pre-print
Non-uniform blind deblurring for general dynamic scenes is a challenging computer vision problem, as blurs arise not only from multiple object motions but also from camera shake and scene depth variation. To remove these complicated motion blurs, conventional energy optimization based methods rely on simple assumptions, such as the blur kernel being partially uniform or locally linear. Moreover, recent machine learning based methods also depend on synthetic blur datasets generated under these assumptions. This makes conventional deblurring methods fail to remove blurs where the blur kernel is difficult to approximate or parameterize (e.g., at object motion boundaries). In this work, we propose a multi-scale convolutional neural network that restores sharp images in an end-to-end manner where blur is caused by various sources. Together, we present a multi-scale loss function that mimics conventional coarse-to-fine approaches. Furthermore, we propose a new large-scale dataset that provides pairs of realistic blurry images and the corresponding ground-truth sharp images obtained with a high-speed camera. With the proposed model trained on this dataset, we demonstrate empirically that our method achieves state-of-the-art performance in dynamic scene deblurring, not only qualitatively but also quantitatively.
arXiv:1612.02177v2 fatcat:7z735fss4nd5to2wwf7hahzfem
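
Both versions of this paper (the arXiv preprint here and the CVPR 2017 entry below) describe a multi-scale loss that mimics coarse-to-fine deblurring. A minimal, hypothetical PyTorch sketch of such a loss follows; the scale factors, the interpolation mode, and the use of MSE are assumptions for illustration rather than the exact formulation in the paper.

    import torch.nn.functional as F

    def multiscale_reconstruction_loss(preds, sharp, scales=(1, 2, 4)):
        """Average reconstruction loss over several scales (coarse-to-fine).

        preds  : list of restored images, finest scale first, as (N, C, H, W) tensors
        sharp  : full-resolution ground-truth sharp image, (N, C, H, W) tensor
        scales : downsampling factor of each prediction relative to full resolution
        Illustrative sketch; not the paper's exact loss.
        """
        loss = 0.0
        for pred, s in zip(preds, scales):
            # Downsample the ground truth so it matches the prediction at this scale.
            target = sharp if s == 1 else F.interpolate(
                sharp, scale_factor=1.0 / s, mode='bilinear', align_corners=False)
            loss = loss + F.mse_loss(pred, target)
        return loss / len(preds)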

Deep Multi-scale Convolutional Neural Network for Dynamic Scene Deblurring

Seungjun Nah, Tae Hyun Kim, Kyoung Mu Lee
2017 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)  
Non-uniform blind deblurring for general dynamic scenes is a challenging computer vision problem, as blurs arise not only from multiple object motions but also from camera shake and scene depth variation. To remove these complicated motion blurs, conventional energy optimization based methods rely on simple assumptions, such as the blur kernel being partially uniform or locally linear. Moreover, recent machine learning based methods also depend on synthetic blur datasets generated under these assumptions. This makes conventional deblurring methods fail to remove blurs where the blur kernel is difficult to approximate or parameterize (e.g., at object motion boundaries). In this work, we propose a multi-scale convolutional neural network that restores sharp images in an end-to-end manner where blur is caused by various sources. Together, we present a multi-scale loss function that mimics conventional coarse-to-fine approaches. Furthermore, we propose a new large-scale dataset that provides pairs of realistic blurry images and the corresponding ground-truth sharp images obtained with a high-speed camera. With the proposed model trained on this dataset, we demonstrate empirically that our method achieves state-of-the-art performance in dynamic scene deblurring, not only qualitatively but also quantitatively.
doi:10.1109/cvpr.2017.35 dblp:conf/cvpr/NahKL17 fatcat:x45ep4255vdfnafqrugmtyahyi

Enhanced Deep Residual Networks for Single Image Super-Resolution

Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, Kyoung Mu Lee
2017 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)  
We remove the batch normalization layers from our network as Nah et al. [19] presented in their image deblurring work.  ... 
doi:10.1109/cvprw.2017.151 dblp:conf/cvpr/LimSKNL17 fatcat:qrrmnvwbhjfjnesmmsgxfcjhke

Dynamic Video Deblurring Using a Locally Adaptive Blur Model

Tae Hyun Kim, Seungjun Nah, Kyoung Mu Lee
2018 IEEE Transactions on Pattern Analysis and Machine Intelligence  
Seungjun Nah received the BS degree in Electrical and Computer Engineering from Seoul National University (SNU), Seoul, Korea in 2014.  ... 
doi:10.1109/tpami.2017.2761348 pmid:29028187 fatcat:2k26ajqtojh2bopd3ulcudnthu

NTIRE 2019 Challenge on Video Super-Resolution: Methods and Results

Seungjun Nah, Radu Timofte, Shuhang Gu, Sungyong Baik, Seokil Hong, Gyeongsik Moon, Sanghyun Son, Kyoung Mu Lee, Xintao Wang, Kelvin C.K. Chan, Ke Yu, Chao Dong (+48 others)
2019 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)  
Nah (seungjun.nah@gmail.com, Seoul National University), R. Timofte  ...  problem because for each low resolution (LR) frame, the space of corresponding high resolution (HR) frames can be very large.  ... 
doi:10.1109/cvprw.2019.00250 dblp:conf/cvpr/NahTGBHMSL19 fatcat:ki2tjqevi5fetinml3tynd7u3m

NTIRE 2017 Challenge on Single Image Super-Resolution: Methods and Results

Radu Timofte, Eirikur Agustsson, Luc Van Gool, Ming-Hsuan Yang, Lei Zhang, Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, Kyoung Mu Lee, Xintao Wang, Yapeng Tian (+65 others)
2017 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)  
This paper reviews the first challenge on single image super-resolution (restoration of rich details in a low-resolution image) with a focus on the proposed solutions and results. A new DIVerse 2K resolution image dataset (DIV2K) was employed. The challenge had 6 competitions divided into 2 tracks with 3 magnification factors each. Track 1 employed the standard bicubic downscaling setup, while Track 2 had unknown downscaling operators (blur kernel and decimation) that were learnable from low- and high-resolution training images. Each competition had ∼100 registered participants, and 20 teams competed in the final testing phase. They gauge the state of the art in single image super-resolution.
doi:10.1109/cvprw.2017.149 dblp:conf/cvpr/TimofteAG0ZLSKN17 fatcat:myclcf7hzve2zetlutq64pqeyu
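
Track 1's bicubic downscaling setup, mentioned in the abstract above, is a simple data-preparation protocol. A small, hypothetical Python sketch using Pillow is shown below; the function name, file paths, and scale factor are illustrative assumptions rather than the challenge's official tooling.

    from PIL import Image

    def make_lr_bicubic(hr_path: str, lr_path: str, scale: int = 4) -> None:
        """Produce a low-resolution image by standard bicubic downscaling.

        Illustrative sketch of a Track 1-style degradation; paths and scale are assumed.
        """
        hr = Image.open(hr_path)
        lr = hr.resize((hr.width // scale, hr.height // scale), resample=Image.BICUBIC)
        lr.save(lr_path)

    # Hypothetical usage for a 4x magnification factor:
    # make_lr_bicubic("0001.png", "0001x4.png", scale=4)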

Super-resolution data assimilation (SRDA)

Sébastien Barthélémy
2021 Zenodo  
Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, and Kyoung Mu Lee. Enhanced deep residual networks for single image super-resolution.  ... 
doi:10.5281/zenodo.5522378 fatcat:vukbrzppxzfddpudvzaiogv2sq

Adaptive Single Image Deblurring [article]

Maitreya Suin, Kuldeep Purohit, A. N. Rajagopalan
2022 arXiv   pre-print
Seungjun Nah, Tae Hyun Kim, and Kyoung Mu Lee. Deep multi-scale convolutional neural network for dynamic scene deblurring. In CVPR, volume 1, page 3, 2017.  ...  et al. [2017], which train on 2103 images from the GoPro dataset (Nah et al. [2017]).  ... 
arXiv:2201.00155v1 fatcat:jexcg67nqfgefi63zja27aj2xe

MAANet: Multi-view Aware Attention Networks for Image Super-Resolution [article]

Jingcai Guo, Shiheng Ma, Song Guo
2019 arXiv   pre-print
[23] Seungjun Nah, Tae Hyun Kim, and Kyoung Mu Lee. 2017.  ...  [20] Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, and Kyoung Mu Lee. 2017.  ... 
arXiv:1904.06252v1 fatcat:2q2b2xr7czgprmddh5u5salkte
Showing results 1 — 15 out of 28 results