Let's See Clearly: Contaminant Artifact Removal for Moving Cameras
[article]
2021
arXiv
pre-print
In this paper, we propose a video restoration method to automatically remove these contaminants and produce a clean video. ...
The entire network is trained on a synthetic dataset that approximates the physical lighting properties of contaminant artifacts. ...
Figure 2 illustrates the proposed two-stage recurrent framework. ...
arXiv:2104.08852v1
fatcat:lrourka56zehblzb7ng34li73q
PP-MSVSR: Multi-Stage Video Super-Resolution
[article]
2021
arXiv
pre-print
Different from the Single Image Super-Resolution (SISR) task, the key for the Video Super-Resolution (VSR) task is to make full use of complementary information across frames to reconstruct the high-resolution ...
in stage-3 to make full use of the feature information of the previous stage. ...
The recurrent framework, in contrast to the sliding-window framework, which treats the restoration of each video frame as a separate task, propagates the underlying information across the full video sequence. ...
arXiv:2112.02828v1
fatcat:ex5c47uufjbzjjemcxpfxvbqra
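The PP-MSVSR entry above contrasts sliding-window processing with a recurrent framework that propagates information across the whole sequence. As a rough illustration of the recurrent idea only (not PP-MSVSR's actual multi-stage architecture; the module name, channel count, and residual output are assumptions), a minimal PyTorch sketch might look like this:

import torch
import torch.nn as nn

class RecurrentPropagation(nn.Module):
    # Toy recurrent backbone: a hidden state carries information from all
    # previously seen frames, unlike a sliding window that only looks at a
    # fixed neighbourhood around each target frame. (Upsampling omitted.)
    def __init__(self, channels=64):
        super().__init__()
        self.channels = channels
        self.fuse = nn.Conv2d(3 + channels, channels, 3, padding=1)
        self.to_rgb = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, frames):                 # frames: (B, T, 3, H, W)
        b, t, _, h, w = frames.shape
        state = frames.new_zeros(b, self.channels, h, w)
        outputs = []
        for i in range(t):                     # propagate the state along time
            state = torch.relu(self.fuse(torch.cat([frames[:, i], state], dim=1)))
            outputs.append(frames[:, i] + self.to_rgb(state))   # residual output
        return torch.stack(outputs, dim=1)     # restored frames (B, T, 3, H, W)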
A Differentiable Two-stage Alignment Scheme for Burst Image Reconstruction with Large Shift
[article]
2022
arXiv
pre-print
To address these challenges, we design a differentiable two-stage alignment scheme, applied sequentially at the patch and pixel levels, for effective JDD-B. ...
The two stages are jointly trained in an end-to-end manner. Extensive experiments demonstrate the significant improvement of our method over existing JDD-B methods. ...
Conclusion: We presented a differentiable two-stage alignment method for high-performance burst image restoration. ...
arXiv:2203.09294v1
fatcat:vb2o677qk5aubbvzbutus6tpqi
Learning Scalable Dictionaries With Application To Scalable Compressive Sensing
2012
Zenodo
Publication in the conference proceedings of EUSIPCO, Bucharest, Romania, 2012 ...
As a basis for our technique we take the regular K-SVD algorithm [4] and build upon it by alternating one of its two main iterative stages, i.e., the dictionary update. ...
Fig. 3 shows reconstruction via the proposed adaptive scalable CS averaged over frames, given the single trained dictionary D_sc, for two video test sequences. ...
doi:10.5281/zenodo.52032
fatcat:tdzkkx2e3rf35fy6eyfstthk5y
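The entry above builds on K-SVD, which alternates a sparse-coding stage with a per-atom dictionary-update stage. A compact NumPy/scikit-learn sketch of that generic alternation (the function name ksvd and its parameters are illustrative; this is the standard K-SVD of [4], not the paper's scalable variant):

import numpy as np
from sklearn.linear_model import orthogonal_mp

def ksvd(Y, n_atoms, sparsity, n_iter=10):
    # Y: (n_features, n_samples) training signals.
    rng = np.random.default_rng(0)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        # Stage 1: sparse coding with the current dictionary (OMP).
        X = orthogonal_mp(D, Y, n_nonzero_coefs=sparsity)
        # Stage 2: update each atom and its coefficients via a rank-1 SVD
        # of the residual restricted to the samples that use the atom.
        for k in range(n_atoms):
            users = np.nonzero(X[k])[0]
            if users.size == 0:
                continue
            E = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, k] = U[:, 0]
            X[k, users] = s[0] * Vt[0]
    return D, X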
AIM 2020 Challenge on Video Extreme Super-Resolution: Methods and Results
[article]
2020
arXiv
pre-print
Track 1 is set up to gauge the state-of-the-art for such a demanding task, where fidelity to the ground truth is measured by PSNR and SSIM. ...
Missing information can be restored well in this region, especially in HR videos, where the high-frequency content mostly consists of texture details. ...
The team divided the network training into two stages for both track 1 and track 2. For stage 1, the L1 loss was used. ...
arXiv:2009.06290v1
fatcat:bbgfzmwupfgcnigwr2onun4zzm
Editorial: Introduction to the Issue on Deep Learning for Image/Video Restoration and Compression
2021
IEEE Journal on Selected Topics in Signal Processing
This special issue covers the state of the art in learned image/video restoration and compression to promote further progress in innovative architectures and training methods for effective and efficient ...
In "Color image restoration exploiting inter-channel correlation with a 3-stage CNN" Cui et al. propose a 3-stage CNN for color image restoration tasks. ...
doi:10.1109/jstsp.2021.3053364
fatcat:hjo5pvw6lvgpfga2wfq4vpaq3q
EDVR: Video Restoration with Enhanced Deformable Convolutional Networks
[article]
2019
arXiv
pre-print
In this work, we propose a novel Video Restoration framework with Enhanced Deformable networks, termed EDVR, to address these challenges. ...
Second, we propose a Temporal and Spatial Attention (TSA) fusion module, in which attention is applied both temporally and spatially, so as to emphasize important features for subsequent restoration. ...
We thank Yapeng Tian for providing the core codes of TDAN [40] . ...
arXiv:1905.02716v1
fatcat:6bl4zdhvxfhldmfc76ie4gbkoe
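The EDVR entry above describes TSA fusion, where attention is applied temporally and spatially to emphasize important features before fusion. A minimal PyTorch sketch of just the temporal half of that idea (the embedding convolutions, channel counts, and sigmoid similarity weighting are illustrative assumptions; the spatial-attention and pyramid parts are omitted):

import torch
import torch.nn as nn

class TemporalAttentionFusion(nn.Module):
    # Weight each neighbouring frame's features by embedding similarity
    # to the reference frame, then fuse with a 1x1 convolution.
    def __init__(self, channels=64, n_frames=5):
        super().__init__()
        self.embed_ref = nn.Conv2d(channels, channels, 3, padding=1)
        self.embed_nbr = nn.Conv2d(channels, channels, 3, padding=1)
        self.fuse = nn.Conv2d(n_frames * channels, channels, 1)

    def forward(self, feats):                    # feats: (B, T, C, H, W)
        b, t, c, h, w = feats.shape
        ref = self.embed_ref(feats[:, t // 2])   # centre frame as reference
        weighted = []
        for i in range(t):
            emb = self.embed_nbr(feats[:, i])
            # Per-pixel correlation with the reference embedding -> (B, 1, H, W).
            corr = torch.sigmoid((emb * ref).sum(dim=1, keepdim=True))
            weighted.append(feats[:, i] * corr)  # soft temporal weighting
        return self.fuse(torch.cat(weighted, dim=1))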
Neural Compression-Based Feature Learning for Video Restoration
[article]
2022
arXiv
pre-print
Therefore, we design a neural compression module to filter the noise and keep the most useful information in features for video restoration. ...
How to efficiently utilize the temporal features is crucial, yet challenging, for video restoration. ...
Proposed Method, Framework Overview: We design a neural compression-based framework for video restoration. ...
arXiv:2203.09208v2
fatcat:672qyeakyjcepg6tq3cmuvv6ce
NTIRE 2020 Challenge on Video Quality Mapping: Methods and Results
[article]
2020
arXiv
pre-print
The challenge includes both a supervised track (track 1) and a weakly-supervised track (track 2) for two benchmark datasets. ...
In particular, track 1 offers a new Internet video benchmark, requiring algorithms to learn the map from more compressed videos to less compressed videos in a supervised training manner. ...
For progressive training, first, the PCD align module and the 1st Restoration module are trained together. ...
arXiv:2005.02291v3
fatcat:z5zgwpnyrveothp337xeb7yfoy
NTIRE 2022 Challenge on Super-Resolution and Quality Enhancement of Compressed Video: Dataset, Methods and Results
[article]
2022
arXiv
pre-print
Track 1 aims at enhancing the videos compressed by HEVC at a fixed QP. Track 2 and Track 3 target both the super-resolution and quality enhancement of HEVC compressed video. ...
The proposed methods and solutions gauge the state-of-the-art of super-resolution and quality enhancement of compressed video. ...
Shiqi Wang from the City University of Hong Kong for providing the results of their method [13] on the validation and test sets. ...
arXiv:2204.09314v2
fatcat:br5dapahr5cyrjowcfjwlkkdnm
Real-time Mask Identification for COVID-19: An Edge Computing-based Deep Learning Framework
2021
IEEE Internet of Things Journal
Our ECMask consists of three main stages: video restoration, face detection, and mask identification. ...
We conduct extensive experiments on real video data to validate the performance, in consideration of the detection accuracy and execution-time efficiency of the whole video analysis, which have ...
Therefore, we employ Video Restoration framework with Enhanced Deformable convolutions (EDVR) as the first stage of ECMask due to its good performance in video restoration. ...
doi:10.1109/jiot.2021.3051844
fatcat:hmjcbnhjwzhmhdy6vivpkrrxji
Unfolding Taylor's Approximations for Image Restoration
[article]
2021
arXiv
pre-print
Specifically, our framework consists of two steps, correspondingly responsible for the mapping and derivative functions. ...
To solve the above problems, inspired by Taylor's Approximations, we unfold Taylor's Formula to construct a novel framework for image restoration. ...
The parameters of Derivative function part G are shared across the progressive stages. The whole framework is trained in an end-to-end manner. ...
arXiv:2109.03442v1
fatcat:vircpwelhbck3jwgr7aqn7bzbe
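The entry above describes a two-part design: a mapping step plus a derivative-function part G whose parameters are shared across progressive stages. A hedged PyTorch sketch of such an unfolded, shared-weight refinement loop (the layer choices and the additive correction form are assumptions for illustration, not the paper's exact formulation):

import torch
import torch.nn as nn

class UnfoldedRestoration(nn.Module):
    # Coarse mapping followed by K progressive corrections produced by a
    # single derivative-style module G, whose weights are shared by all stages.
    def __init__(self, channels=32, n_stages=3):
        super().__init__()
        self.mapping = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 3, 3, padding=1))
        self.G = nn.Sequential(                      # shared across all stages
            nn.Conv2d(6, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 3, 3, padding=1))
        self.n_stages = n_stages

    def forward(self, y):                            # y: degraded input (B, 3, H, W)
        x = self.mapping(y)                          # coarse estimate
        for _ in range(self.n_stages):               # progressive refinement
            x = x + self.G(torch.cat([x, y], dim=1)) # shared-weight correction
        return x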
Dataset and Network Structure: Towards Frames Selection for Fast Video Deblurring
2021
IEEE Access
The training stage took 3 days for training the DeblurNet system with both of its sub-modules. ...
This work comes with the following main contributions: 1) We introduce DeblurNet, a two-stage training-based deep learning model for fast and robust frame-selective video deblurring. ...
doi:10.1109/access.2021.3074199
fatcat:jcqk6tkydnbkhcn6opgnpkjzja
A deep learning framework for quality assessment and restoration in video endoscopy
[article]
2019
arXiv
pre-print
We propose a fully automatic framework that can: 1) detect and classify six different primary artifacts, 2) provide a quality score for each frame and 3) restore mildly corrupted frames. ...
To detect different artifacts, our framework exploits a fast multi-scale, single-stage convolutional neural network detector. ...
Compared to two-stage detectors, single-stage detectors mainly suffer from two issues: high false detection due to 1) the presence of varied-size objects and 2) the requirement for a high initial number of anchor boxes that ...
arXiv:1904.07073v1
fatcat:aixdba6zazdzzjqebwbeiu7snm
Improving Action Localization by Progressive Cross-Stream Cooperation
2019
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
To improve action localization results at the video level, we additionally propose a new strategy to train class-specific actionness detectors for better temporal segmentation, which can be readily learnt ...
As a result, our iterative framework progressively improves action localization results at the frame level. ...
Action Detection Model: Building upon the two-stream framework [17], we propose a Progressive Cross-stream Cooperation (PCSC) model for action detection at the frame level. ...
doi:10.1109/cvpr.2019.01229
dblp:conf/cvpr/SuOZX19
fatcat:dsjzouvjynhs7ccw7bvshwhuc4
Showing results 1 — 15 out of 25,724 results