572 Hits in 4.3 sec

Feature-Align Network with Knowledge Distillation for Efficient Denoising [article]

Lucas D. Young, Fitsum A. Reda, Rakesh Ranjan, Jon Morton, Jun Hu, Yazhu Ling, Xiaoyu Xiang, David Liu, Vikas Chandra
2021 arXiv   pre-print
We propose an efficient neural network for RAW image denoising. ... allows knowledge distillation from large denoising networks in the form of a perceptual content loss. (3) Empirical analysis of our efficient model trained to specialize on different noise subranges. ... Feature-Align Network and Knowledge Distillation for Efficient Noise Modeling: Image noise originates in the Bayer RAW domain. ... (A sketch of such a perceptual distillation loss follows this entry.)
arXiv:2103.01524v2 fatcat:4vf2kf7xrrenbkskhjy3lngrrq
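
The "perceptual content loss" mentioned above lends itself to a compact illustration. Below is a minimal PyTorch sketch, assuming a frozen teacher that exposes an intermediate feature extractor as `teacher.features` (a hypothetical hook); the pixel term and the weight `w_feat` are assumptions, not the paper's exact formulation.

```python
# Minimal sketch, not the paper's exact loss. Assumes `teacher` is frozen
# (requires_grad=False) and `teacher.features` returns an intermediate
# feature map of its input.
import torch
import torch.nn.functional as F

def perceptual_distillation_loss(student_out, noisy, teacher, w_feat=0.1):
    with torch.no_grad():
        teacher_out = teacher(noisy)                  # large model's restoration
        target_feats = teacher.features(teacher_out)  # fixed target features
    student_feats = teacher.features(student_out)     # gradients reach the student
    pixel = F.mse_loss(student_out, teacher_out)
    content = F.mse_loss(student_feats, target_feats)
    return pixel + w_feat * content
```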

Unpaired Learning of Deep Image Denoising [article]

Xiaohe Wu, Ming Liu, Yue Cao, Dongwei Ren, Wangmeng Zuo
2020 arXiv   pre-print
To facilitate unpaired learning of a denoising network, this paper presents a two-stage scheme that incorporates self-supervised learning and knowledge distillation. ... For knowledge distillation, we first apply the learned noise models to clean images to synthesize a paired set of training images, and use the real noisy images and the corresponding denoising results ... (The data-synthesis step is sketched below.)
arXiv:2008.13711v1 fatcat:vgqkgznfwfhp3esnwumiz267ou
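
The distillation stage described in the snippet can be pictured as follows. This is a hedged sketch: `noise_model.sample`, `stage1_denoiser`, and the L1 losses are illustrative stand-ins, not necessarily what the paper uses.

```python
# Sketch of stage two: synthesize pairs with a learned noise model, and also
# supervise on (real noisy image, stage-one denoised result) pairs.
import torch
import torch.nn.functional as F

def distillation_step(student, noise_model, stage1_denoiser,
                      clean_batch, real_noisy_batch, optimizer):
    with torch.no_grad():
        synth_noisy = clean_batch + noise_model.sample(clean_batch)
        pseudo_clean = stage1_denoiser(real_noisy_batch)
    loss = (F.l1_loss(student(synth_noisy), clean_batch) +
            F.l1_loss(student(real_noisy_batch), pseudo_clean))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```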

2021 Index IEEE Transactions on Image Processing Vol. 30

2021 IEEE Transactions on Image Processing  
Departments and other items may also be covered if they have been judged to have archival value. The Author Index contains the primary entry for each item, listed under the first author's name. ... The primary entry includes the coauthors' names, the title of the paper or other item, and its location, specified by the publication abbreviation, year, month, and inclusive pagination. ... , +, TIP 2021 4735-4746; Resolution-Aware Knowledge Distillation for Efficient Inference. Feng, Z.; Robust Face Alignment by Multi-Order High-Precision Hourglass Network. ...
doi:10.1109/tip.2022.3142569 fatcat:z26yhwuecbgrnb2czhwjlf73qu

Collaborative Group Learning [article]

Shaoxiong Feng, Hongshen Chen, Xuancheng Ren, Zhuoye Ding, Kan Li, Xu Sun
2021 arXiv   pre-print
Second, to resist student homogenization, students first compose diverse feature sets by exploiting the inductive bias from subsets of the training data, and then aggregate and distill different complementary ... In this paper, we propose Collaborative Group Learning, an efficient framework that aims to diversify the feature representation and provide effective regularization. ... ...plicit and efficient way to boost knowledge transfer by aligning the intermediate features between selected students ... (Peer feature alignment is sketched after this entry.)
arXiv:2009.07712v4 fatcat:y3srfy4w4fa2dnakxaix3pfrxi
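
Aligning intermediate features between peer students admits a simple illustration. The sketch below pulls each student toward the detached mean of its peers' features; the student-selection strategy and weighting in the paper may well differ.

```python
# Hedged sketch of peer feature alignment among collaborating students.
import torch
import torch.nn.functional as F

def peer_alignment_loss(student_feats):
    """student_feats: list of intermediate feature maps, one per student,
    all the same shape. Each student is pulled toward the detached mean of
    its peers, transferring knowledge without a single shared target."""
    loss = 0.0
    for i, f in enumerate(student_feats):
        peers = [g.detach() for j, g in enumerate(student_feats) if j != i]
        peer_mean = torch.stack(peers).mean(dim=0)
        loss = loss + F.mse_loss(f, peer_mean)
    return loss / len(student_feats)
```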

Efficient Deep Image Denoising via Class Specific Convolution [article]

Lu Xu, Jiawei Zhang, Xuanye Cheng, Feng Zhang, Xing Wei, Jimmy Ren
2021 arXiv   pre-print
In this paper, we propose an efficient deep neural network for image denoising based on pixel-wise classification. ... Although a computationally efficient network cannot effectively remove noise from arbitrary content, it is still capable of denoising a specific type of pattern or texture. ... Nowadays, knowledge distillation (Hinton, Vinyals, and Dean 2014), parameter pruning (Han et al. 2015), and network quantization (Jacob et al. 2018) are widely used to compress networks. ... (Per-pixel routing among specialized convolutions is sketched below.)
arXiv:2103.01624v2 fatcat:xngmcqmhvre2rhegx3gujviur4
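
Pixel-wise classification driving specialized convolutions could look like the following sketch: a 1x1 classifier predicts per-pixel weights over K classes, and K conv branches, each specializing on one kind of pattern, are blended by those weights. Branch count and layer sizes are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch of class-specific convolution via per-pixel soft routing.
import torch
import torch.nn as nn

class ClassSpecificConv(nn.Module):
    def __init__(self, channels=32, num_classes=4):
        super().__init__()
        self.classifier = nn.Conv2d(channels, num_classes, kernel_size=1)
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1)
            for _ in range(num_classes))

    def forward(self, x):
        weights = torch.softmax(self.classifier(x), dim=1)    # B x K x H x W
        outs = torch.stack([b(x) for b in self.branches], 1)  # B x K x C x H x W
        return (weights.unsqueeze(2) * outs).sum(dim=1)       # B x C x H x W
```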

2020 Index IEEE Transactions on Image Processing Vol. 29

2020 IEEE Transactions on Image Processing  
, +, TIP 2020 5848-5861; Spatiotemporal Knowledge Distillation for Efficient Estimation of Aerial Video Saliency. ... , +, TIP 2020 5408-5419; Spatiotemporal Knowledge Distillation for Efficient Estimation of Aerial Video Saliency. ...
doi:10.1109/tip.2020.3046056 fatcat:24m6k2elprf2nfmucbjzhvzk3m

Split-attention Multiframe Alignment Network for Image Restoration

Yongyi Yu, Mingzhe Liu, Huajun Feng, Zhihai Xu, Qi Li
2020 IEEE Access  
AFM uses an attention mechanism to rescale the aligned images and enables the registration network and the subsequent image restoration networks to be trained jointly. ... To solve the problem that most existing image registration approaches can only align two images in one inference, we propose a split-attention multiframe alignment network (SAMANet). ... performance and computational efficiency. ... (The attention-based rescaling is sketched after this entry.)
doi:10.1109/access.2020.2967028 fatcat:yohbep22zvhgzmkc5q6zhpxr3m
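
The attention-based rescaling of aligned frames can be sketched as below: per-frame, per-pixel scores are softmaxed across the N aligned frames and used to fuse them, so the fusion is differentiable and trainable jointly with registration and restoration. Module shapes are assumptions, not SAMANet's actual AFM.

```python
# Hedged sketch: attention-weighted fusion of N aligned frames.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=3, padding=1)

    def forward(self, aligned):                # aligned: B x N x C x H x W
        b, n, c, h, w = aligned.shape
        scores = self.score(aligned.reshape(b * n, c, h, w)).reshape(b, n, 1, h, w)
        attn = torch.softmax(scores, dim=1)    # weight each of the N frames
        return (attn * aligned).sum(dim=1)     # fused B x C x H x W
```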

View Blind-spot as Inpainting: Self-Supervised Denoising with Mask Guided Residual Convolution [article]

Yuhongze Zhou, Liguang Zhou, Tin Lun Lam, Yangsheng Xu
2021 arXiv   pre-print
Different from partial convolution and gated convolution, it provides moderate freedom for network learning. ... The experiments show that our proposed plug-and-play MGRConv can help blind-spot based denoising networks reach promising results with both existing single-image-based and dataset-based methods. ... Acknowledgments: We thank Qi Song, Zheng Wang, and Junjie Hu for comments and discussions, and Yuejin Li for his support in using GPU clusters. ... (A mask-guided convolution sketch follows this entry.)
arXiv:2109.04970v1 fatcat:oh5vrygglfasphtp46rbq6s5my
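
For context on the convolution family being contrasted, here is a hedged sketch of a mask-guided convolution with a soft, learnable mask update, sitting somewhere between partial convolution (hard mask) and gated convolution (fully learned gate). It illustrates the design space only; it is not the paper's exact MGRConv.

```python
# Hedged sketch: convolution guided by a visibility mask, with a soft,
# learnable mask update rather than partial convolution's binary one.
import torch
import torch.nn as nn

class MaskGuidedConv(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.feat = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
        self.mask = nn.Conv2d(1, 1, k, padding=k // 2, bias=False)

    def forward(self, x, m):                 # m: B x 1 x H x W visibility mask
        y = self.feat(x * m)                 # suppress masked-out (blind) pixels
        m_new = torch.sigmoid(self.mask(m))  # soft, learnable mask update
        return y * m_new, m_new
```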

Gradient Adversarial Training of Neural Networks [article]

Ayan Sinha, Zhao Chen, Vijay Badrinarayanan, Andrew Rabinovich
2018 arXiv   pre-print
...example, (2) for knowledge distillation, we perform binary classification of gradient tensors derived from the student or teacher network and tune the student's gradient tensor to mimic the teacher's gradient ... Specifically, gradient adversarial training increases the robustness of a network to adversarial attacks and is able to better distill the knowledge from a teacher network to a student network compared to ... Knowledge distillation: We demonstrate GREAT's potential for knowledge distillation on the CIFAR-10 and mini-ImageNet datasets. ... (This gradient-discriminator setup is sketched below.)
arXiv:1806.08028v1 fatcat:pjqvv7ylevbcfb6y3c7zf7gqda
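
The gradient-adversarial distillation idea can be sketched as follows: a discriminator `disc` (hypothetical) classifies input-gradient tensors as coming from the teacher or the student, and the student is additionally trained to fool it, so its gradients mimic the teacher's. The loss weight and the choice of input gradients (rather than some other gradient tensor) are assumptions.

```python
# Hedged sketch of gradient-adversarial knowledge distillation.
import torch
import torch.nn.functional as F

def input_gradient(net, x, y, create_graph=False):
    """Gradient of the task loss w.r.t. the input image."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(net(x), y)
    (g,) = torch.autograd.grad(loss, x, create_graph=create_graph)
    return g

def great_kd_losses(student, teacher, disc, x, y, alpha=0.1):
    g_teacher = input_gradient(teacher, x, y)                    # "real" sample
    g_student = input_gradient(student, x, y, create_graph=True) # differentiable
    real, fake = disc(g_teacher), disc(g_student.detach())
    d_loss = (F.binary_cross_entropy_with_logits(real, torch.ones_like(real)) +
              F.binary_cross_entropy_with_logits(fake, torch.zeros_like(fake)))
    adv = disc(g_student)  # student tries to make its gradients look teacher-like
    s_loss = (F.cross_entropy(student(x), y) +
              alpha * F.binary_cross_entropy_with_logits(adv, torch.ones_like(adv)))
    return d_loss, s_loss
```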

Blind Image Super-Resolution with Spatial Context Hallucination [article]

Dong Huo, Yee-Hong Yang
2020 arXiv   pre-print
In this paper, we propose a novel Spatial Context Hallucination Network (SCHN) for blind super-resolution without knowing the degradation kernel. ... Thus, we integrate denoising, deblurring, and super-resolution within one framework to avoid such a problem. We train our model on two high-quality datasets, DIV2K and Flickr2K. ... Conclusions: In this paper, we propose a new spatial context hallucination network for blind SR tasks. To the best of our knowledge, we are the first to propose such an idea. ...
arXiv:2009.12461v1 fatcat:ew2oiiqt7jfkxhydlyeo7rzsve

Biomedical ontology alignment: an approach based on representation learning

Prodromos Kolyvakis, Alexandros Kalousis, Barry Smith, Dimitris Kiritsis
2018 Journal of Biomedical Semantics  
Our system obtained overall F-scores of 93.2% and 89.2% for these experiments, thus achieving state-of-the-art results. ... We performed additional experiments on aligning FMA to NCI Thesaurus and to SNOMED CT based on a reference alignment extracted from the UMLS Metathesaurus. ... The resulting hyperparameters controlling the effect of retrofitting, k_S, and of knowledge distillation, k_LD, were 10^6 and 10^3, respectively. ...
doi:10.1186/s13326-018-0187-8 pmid:30111369 pmcid:PMC6094585 fatcat:x4ojf6uht5g45dke7zmty7slue

Person Re-IDentification based on mutual learning with embedded noise block

Xinyue Fan, Jia Zhang, Yang Lin
2021 IEEE Access  
INDEX TERMS: Person re-identification, mutual learning, knowledge distillation, network decoupling. ... To overcome the coupling problem in mutual learning, we designed a lightweight noise block and embedded it into mutual learning, which greatly improves the complementarity between networks. ... One difference between mutual learning and knowledge distillation is that knowledge transfer in distillation is one-way, whereas in mutual learning it is two-way ... (Two-way mutual learning is sketched after this entry.)
doi:10.1109/access.2021.3102450 fatcat:u2au3zd7tjeb5ituomstvm7sde
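
Two-way mutual learning, the baseline this paper decouples with its noise block, reduces to each network matching the other's softened predictions in addition to its own task loss. A minimal sketch, with the temperature and equal weighting assumed (the noise block itself is not shown):

```python
# Hedged sketch of deep mutual learning: two peers teach each other.
import torch.nn.functional as F

def mutual_learning_losses(logits_a, logits_b, labels, T=1.0):
    kl_a = F.kl_div(F.log_softmax(logits_a / T, dim=1),
                    F.softmax(logits_b.detach() / T, dim=1),
                    reduction="batchmean")
    kl_b = F.kl_div(F.log_softmax(logits_b / T, dim=1),
                    F.softmax(logits_a.detach() / T, dim=1),
                    reduction="batchmean")
    loss_a = F.cross_entropy(logits_a, labels) + kl_a  # A learns from B
    loss_b = F.cross_entropy(logits_b, labels) + kl_b  # B learns from A
    return loss_a, loss_b
```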

To Embed or Not: Network Embedding as a Paradigm in Computational Biology

Walter Nelson, Marinka Zitnik, Bo Wang, Jure Leskovec, Anna Goldenberg, Roded Sharan
2019 Frontiers in Genetics  
In this review, we survey traditional and new approaches for graph embedding and compare their application to fundamental problems in network biology against using the networks directly. ... We consider a broad variety of applications including protein network alignment, community detection, and protein function prediction. ... APPLICATIONS, Network Alignment: A basic operation in biological research is to transfer knowledge across species. ...
doi:10.3389/fgene.2019.00381 pmid:31118945 pmcid:PMC6504708 fatcat:t4h5izbezrfdbawvvcfutjyzlu

Table of contents

2020 IEEE Transactions on Image Processing  
Dagan Feng 1890 ... Spatiotemporal Knowledge Distillation for Efficient Estimation of Aerial Video Saliency ... Pinilla, and H. Arguello 2598; Distilling Channels for Efficient Deep Tracking, S. Ge, Z. Luo, C. Zhang, Y. Hua, and D. ...
doi:10.1109/tip.2019.2940372 fatcat:h23ul2rqazbstcho46uv3lunku

Knowledge Distillation: A Survey [article]

Jianping Gou, Baosheng Yu, Stephen John Maybank, Dacheng Tao
2021 arXiv   pre-print
In recent years, deep neural networks have been successful in both industry and academia, especially for computer vision tasks. ... Furthermore, challenges in knowledge distillation are briefly reviewed and comments on future research are discussed and put forward. ... Unlike distillation in (Chen et al., 2018b; Tang and Wang, 2018), Pan et al. (2019) designed an enhanced collaborative denoising autoencoder (ECAE) model for recommender systems via knowledge distillation ... (The canonical soft-target distillation loss is sketched below.)
arXiv:2006.05525v6 fatcat:aedzaeln5zf3jgjsgsn5kvjrri
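
For reference, the canonical soft-target loss (Hinton et al., 2015) that surveys like this one take as the starting point, in a minimal sketch with an assumed temperature and mixing weight:

```python
# Hedged sketch of classic knowledge distillation with softened targets.
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)  # rescale gradients by T^2
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```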
Showing results 1 — 15 of 572.