A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2021; you can also visit the original URL.
The file type is application/pdf.
Low-Dose CT Denoising Using a Structure-Preserving Kernel Prediction Network
[article]
2021
arXiv
pre-print
To address this issue, we propose a Structure-preserving Kernel Prediction Network (StructKPN) that combines the kernel prediction network with a structure-aware loss function that utilizes the pixel gradient ...
Despite recent advances, CNN-based approaches typically apply filters in a spatially invariant way and adopt similar pixel-level losses, which treat all regions of the CT image equally and can be inefficient ...
Table 2. Denoising performance (PSNR/SSIM) by RED-CNN and KPN models trained with different loss functions. L1 denotes the L1 loss. ...
arXiv:2105.14758v3
fatcat:nlciihoyszealfxmhnmlan3qim
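Kernel-prediction denoisers of the kind this entry describes output a per-pixel filter kernel rather than the denoised pixel itself; the predicted kernel is typically softmax-normalized and applied to the noisy neighborhood. A minimal numpy sketch of that application step, with the predicting network replaced by an arbitrary logit array (names and shapes here are illustrative assumptions, not the paper's code):

```python
import numpy as np

def apply_predicted_kernels(noisy, kernels):
    """Apply a per-pixel k x k kernel to a noisy image.

    noisy   : (H, W) noisy input image
    kernels : (H, W, k, k) raw per-pixel kernel logits (network output)
    Returns the filtered (H, W) image.
    """
    H, W = noisy.shape
    k = kernels.shape[-1]
    r = k // 2
    # Softmax-normalize each predicted kernel so its weights sum to 1.
    flat = kernels.reshape(H, W, -1)
    flat = np.exp(flat - flat.max(axis=-1, keepdims=True))
    flat /= flat.sum(axis=-1, keepdims=True)
    weights = flat.reshape(H, W, k, k)
    padded = np.pad(noisy, r, mode="edge")
    out = np.zeros_like(noisy)
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + k, j:j + k]
            out[i, j] = (weights[i, j] * patch).sum()
    return out
```

With all-zero logits the softmax yields uniform weights, so the filter reduces to a box average, which is a convenient sanity check.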
A detail preserving neural network model for Monte Carlo denoising
2020
Computational Visual Media
The features are extracted separately and then integrated into a shallow kernel predictor. Our loss function considers perceptual loss, which also improves detail preservation. ...
The network predicts kernels which are then applied to the noisy input. ...
Loss function validation To validate the effects of perceptual loss used in our loss function, we compare our method with and without perceptual loss in Fig. 11 . ...
doi:10.1007/s41095-020-0167-7
fatcat:cozemwjkkzbwdc2hkwlzyalwxe
A survey on deep learning-based Monte Carlo denoising
2021
Computational Visual Media
Recent years have seen increasing attention and significant progress in denoising MC rendering with deep learning, by training neural networks to reconstruct denoised rendering results from sparse MC samples ...
However, the integration has to balance estimator bias and variance, which causes visually distracting noise with low sample counts. ...
filter
Bako et al. [16] | PD | PA | kernel | image | O, 16 | CNN, kernel-predicting network
Vogels et al. [9] | PD, SD, AS | PA | kernel | image | O, 16 | CNN, asymmetric loss functions
Back et al. [20] ...
doi:10.1007/s41095-021-0209-9
fatcat:pki3mbbpw5fjxf6oaphcnlto4a
Identification of Unsound Grains in Wheat Using Deep Learning and Terahertz Spectral Imaging Technology
2022
Agronomy
As validated by the ResNet-50 classification network, the proposed model processes images with an accuracy of 94.8%, and the recognition accuracy is improved by 3.7% and 1.9%, respectively, compared to the images with only denoising and feature extraction. ...
The loss function for asymmetric learning is also proposed, which adjusts the denoising results interactively in the network to further enhance the denoising effect. ...
doi:10.3390/agronomy12051093
fatcat:uorec5u3uzegta4fjhrdw5flmi
Lightweight Image Restoration Network for Strong Noise Removal in Nuclear Radiation Scenes
2021
Sensors
The entire network adopts a Mish activation function and asymmetric convolutions to improve the overall performance. ...
The TLU is at the bottom of the NLU and learns textures through an independent loss. ...
Acknowledgments: The authors are grateful for the experimental platform and resources provided by the Sichuan Province Key Laboratory of Special Environmental Robotics. ...
doi:10.3390/s21051810
pmid:33807719
fatcat:y73v43mpqjen3gz4cvp7sprub4
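The Mish activation this entry adopts is defined as x · tanh(softplus(x)), where softplus(x) = ln(1 + e^x); it is a smooth, non-monotonic alternative to ReLU. A one-function sketch:

```python
import numpy as np

def mish(x):
    """Mish activation: x * tanh(softplus(x)).

    Near zero it behaves like a softened identity; for large negative
    inputs it decays toward zero, like a smooth ReLU.
    """
    return x * np.tanh(np.log1p(np.exp(x)))
```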
A Multifeature Extraction Method Using Deep Residual Network for MR Image Denoising
2020
Computational and Mathematical Methods in Medicine
Finally, the joint loss function is defined by combining the perceptual loss and the traditional mean square error loss. ...
First, the feature extraction layer is constructed by combining three different sizes of convolution kernels, which are used to obtain multiple shallow features for fusion and increase the network's multiscale ...
MSE Loss Function. The pixel-by-pixel loss function uses the traditional MSE method to calculate the MSE of the real target and the predicted target. ...
doi:10.1155/2020/8823861
pmid:33204301
pmcid:PMC7665932
fatcat:46x3kskctncq3nt6wrqzgrb7oy
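A joint loss of the kind this entry defines combines a pixel-wise MSE term with a feature-space (perceptual) term. Real perceptual losses compare activations of a pretrained network such as VGG; in the self-contained sketch below a hypothetical gradient-based feature map stands in for that extractor, so only the structure of the combination is faithful:

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two arrays."""
    return np.mean((a - b) ** 2)

def gradient_features(img):
    """Stand-in feature extractor phi (hypothetical): image gradients.
    A real perceptual loss would use pretrained-network activations."""
    gy, gx = np.gradient(img)
    return np.stack([gx, gy])

def joint_loss(pred, target, lam=0.1):
    """Joint loss = pixel-space MSE + lam * feature-space MSE."""
    return mse(pred, target) + lam * mse(gradient_features(pred),
                                         gradient_features(target))
```

The weight `lam` trades pixel fidelity against feature fidelity; its value here is an arbitrary placeholder.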
Research on Image Denoising and Super-Resolution Reconstruction Technology of Multiscale-Fusion Images
2021
Mobile Information Systems
loss measurement in our devised loss function. ...
strategy, our method is capable of using the multiple convolution kernels with different sizes to expand the receptive field in parallel. (3) The ablation experiments verify the effectiveness of each employed ...
Moreover, the second method is to decompose a symmetric convolution kernel into multiple small asymmetric (one-dimensional) convolution kernels. ...
doi:10.1155/2021/5184688
fatcat:wy4y3hrysffudk52pm5mmve24m
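The decomposition of a symmetric 2-D kernel into small asymmetric 1-D kernels, as this entry describes, is exact for rank-1 (separable) kernels: convolving with a k×1 column and then a 1×k row equals convolving with their outer product. A numpy sketch (the naive `conv2d_valid` helper is for illustration only, not an efficient implementation):

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Plain 'valid'-mode 2-D correlation, written out explicitly."""
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

# A separable 3x3 smoothing kernel as the product of two 1-D kernels.
col = np.array([[1.0], [2.0], [1.0]])   # 3x1 asymmetric kernel
row = np.array([[1.0, 2.0, 1.0]])       # 1x3 asymmetric kernel
full = col @ row                        # equivalent symmetric 3x3 kernel
```

Applying `col` then `row` costs 2k multiplies per pixel instead of k², which is the motivation for the decomposition.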
Real Image Denoising with Feature Attention
[article]
2020
arXiv
pre-print
To advance the practicability of denoising algorithms, this paper proposes a novel single-stage blind real image denoising network (RIDNet) by employing a modular architecture. ...
Furthermore, the evaluation in terms of quantitative metrics and visual quality on three synthetic and four real noisy datasets against 19 state-of-the-art algorithms demonstrates the superiority of our ...
There are several choices available as a loss function for optimization, such as ℓ2 [63, 64, 8], perceptual loss [35, 31], total variation loss [35], and asymmetric loss [31]. ...
arXiv:1904.07396v2
fatcat:j5wzfqf5uzf4hdlt6lwe7mtmim
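The asymmetric loss cited in this entry, in one common formulation (used e.g. for noise-level estimation in blind denoisers), weights under- and over-estimation errors differently; the exact form varies between papers, so treat this as a sketch of the general idea rather than RIDNet's definition:

```python
import numpy as np

def asymmetric_mse(pred, target, alpha=0.3):
    """Asymmetric MSE: weight |alpha - 1[err < 0]| on squared errors.

    With alpha < 0.5, errors where pred < target (under-estimation)
    receive weight (1 - alpha) and are penalized more heavily than
    over-estimation, which receives weight alpha.
    """
    err = pred - target
    weight = np.where(err < 0, 1.0 - alpha, alpha)
    return np.mean(weight * err ** 2)
```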
Digital Gimbal: End-to-end Deep Image Stabilization with Learnable Exposure Times
[article]
2021
arXiv
pre-print
We demonstrate this method's advantage over the traditional approach of deblurring a single image or denoising a fixed-exposure burst on both synthetic and real data. ...
These devices, however, are often physically cumbersome and expensive, limiting their widespread use. ...
The predicted kernel size is 5 × 5. Our loss hyperparameters are µ = 1, α = 0.9999886, β = 100. ...
arXiv:2012.04515v4
fatcat:vsczuc7ovvcprpn3uhyal3nhnu
Image denoising in nonlinear scale-spaces: automatic scale selection via cross-validation
2005
IEEE International Conference on Image Processing 2005
This paper considers optimal scale selection when nonlinear diffusion and morphological scale-spaces are utilized for image denoising. ...
The proposed novel algorithms do not require knowledge of the noise variance, have acceptable computational cost and are readily integrated with a wide class of scale-space inducing processes which require ...
choices of the loss function L ∈ {L1, L2}. ...
doi:10.1109/icip.2005.1529792
dblp:conf/icip/PapandreouM05
fatcat:ewugrip3xvgpzicols2fo362yq
A Tour of Modern Image Filtering: New Insights and Methods, Both Practical and Theoretical
2013
IEEE Signal Processing Magazine
True MSEs, for the asymmetric filters and their symmetrized versions, are estimated through Monte-Carlo simulations, and they all match very well with the predicted ones (see Figures 15 and 16).
The kernel function K(·) is a symmetric function with respect to the indices i and j. K(·) is also a positive-valued and unimodal function that measures the "similarity" between the samples y_i and y_j ...
With global parametric and variational methods, the solutions are often implemented using large-scale global optimization techniques, which can be hard (sometimes even impossible) to "kernelize," for algorithmic ...
doi:10.1109/msp.2011.2179329
fatcat:f7q4moozmvecpoypv4mfxx52iu
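A kernel with the properties this entry lists (symmetric in its arguments, positive-valued, unimodal) can be sketched as the Gaussian photometric weight used in bilateral and non-local-means style filters; the bandwidth `h` is an assumed parameter for illustration:

```python
import numpy as np

def similarity_kernel(y_i, y_j, h=1.0):
    """Gaussian photometric kernel: symmetric in (y_i, y_j), strictly
    positive, and unimodal with its maximum (1.0) at y_i == y_j."""
    return np.exp(-((y_i - y_j) ** 2) / (h ** 2))
```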
A Wavenet for Speech Denoising
[article]
2018
arXiv
pre-print
Specifically, the model makes use of non-causal, dilated convolutions and predicts target fields instead of a single target sample. ...
In order to overcome this limitation, we propose an end-to-end learning method for speech denoising based on Wavenet. ...
Participants were presented with 4 variants of each sample: i) the original mix with speech and background-noise, ii) clean speech, iii) speech denoised by Wiener filtering, and iv) speech denoised with ...
arXiv:1706.07162v3
fatcat:vdkca5b4ifaidfl3ltrlbisb7m
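The non-causal dilated convolutions this entry relies on space their filter taps `dilation` samples apart, so stacking layers with doubling dilation grows the receptive field geometrically while the parameter count grows linearly. A 1-D sketch of the operation and of the stacked receptive field (names and shapes are illustrative):

```python
import numpy as np

def dilated_conv1d(x, w, dilation):
    """'Valid'-mode 1-D dilated convolution: taps spaced `dilation` apart."""
    k = len(w)
    span = (k - 1) * dilation + 1  # extent of input the kernel covers
    return np.array([
        sum(w[t] * x[i + t * dilation] for t in range(k))
        for i in range(len(x) - span + 1)
    ])

def receptive_field(kernel_size, dilations):
    """Receptive field of a stack of dilated convolution layers."""
    return 1 + sum((kernel_size - 1) * d for d in dilations)
```

For kernel size 2 and dilations 1, 2, 4, 8 the stack sees 16 input samples, which is the doubling pattern Wavenet-style models exploit.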
Multi-Feature Guided Low-Light Image Enhancement
2021
Applied Sciences
Through these methods, our network can effectively denoise and enhance images. ...
In this paper, the feature extraction is guided by the illumination map and noise map, and then the neural network is trained to predict the local affine model coefficients in the bilateral space. ...
to the three loss functions. ...
doi:10.3390/app11115055
fatcat:njjk5q4zqbhxvcnpoopauhgqga
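Predicting local affine model coefficients in bilateral space, as this entry describes, means each pixel is transformed as A·x + b with (A, b) stored on a coarse grid. The simplified sketch below uses hypothetical grid shapes and nearest-cell lookup; a real bilateral-grid model slices the grid with trilinear interpolation guided by the illumination map:

```python
import numpy as np

def apply_affine_grid(img, A_grid, b_grid):
    """Apply per-cell affine coefficients from a coarse (gh, gw) grid:
    each pixel uses the (A, b) of the grid cell it falls into."""
    H, W = img.shape
    gh, gw = A_grid.shape
    rows = np.minimum((np.arange(H) * gh) // H, gh - 1)
    cols = np.minimum((np.arange(W) * gw) // W, gw - 1)
    A = A_grid[rows][:, cols]
    b = b_grid[rows][:, cols]
    return A * img + b
```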
CNN Hyperparameter Optimization using Random Grid Coarse-to-fine Search for Face Classification
2021
Kinetik
The SELU activation function used in the next step was the one with the best average performance. ...
The optimized hyperparameters were those connected to the network structure, including the activation function, the number of kernels, the kernel size, and the number of nodes in the fully connected layers ...
After the convolutional and pooling layers, a feed-forward neural network with a fully connected layer is added with an activation function; the last layer, the output layer, is a predictive layer with softmax ...
doi:10.22219/kinetik.v6i1.1185
fatcat:b3fsnhseeferljspxovd25h4cy
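The softmax predictive layer this entry ends with converts the final layer's logits into class probabilities. A numerically stable sketch (subtracting the per-row maximum before exponentiating avoids overflow):

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis: exponentiate
    shifted logits and normalize so each row sums to 1."""
    z = logits - np.max(logits, axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)
```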
JDSR-GAN: Constructing A Joint and Collaborative Learning Network for Masked Face Super-Resolution
[article]
2021
arXiv
pre-print
The discriminator utilizes some carefully designed loss functions to ensure the quality of the recovered face images. ...
Given a low-quality face image with the mask as input, the role of the generator composed of a denoising module and super-resolution module is to acquire a high-quality high-resolution face image. ...
Loss Functions Asymmetric loss and total variation (TV) regularization. ...
arXiv:2103.13676v1
fatcat:5mth6cuojjdxzax2f5naun4xzq
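Total variation (TV) regularization, one of the losses this entry names, penalizes the summed absolute differences between neighboring pixels, which discourages high-frequency noise while tolerating sharp edges. A sketch of the common anisotropic variant:

```python
import numpy as np

def tv_loss(img):
    """Anisotropic total-variation penalty: sum of absolute horizontal
    and vertical differences between adjacent pixels."""
    dh = np.abs(np.diff(img, axis=1)).sum()  # horizontal neighbors
    dv = np.abs(np.diff(img, axis=0)).sum()  # vertical neighbors
    return dh + dv
```

A constant image has zero TV, and any intensity change adds to the penalty in proportion to its magnitude.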
Showing results 1 — 15 out of 980 results