
Fast, Accurate and Lightweight Super-Resolution with Neural Architecture Search [article]

Xiangxiang Chu and Bo Zhang and Hailong Ma and Ruijun Xu and Qingyuan Li
2020 arXiv   pre-print
Recent contributions are struggling to manually maximize this balance, while our work achieves the same goal automatically with neural architecture search.  ...  Specifically, we handle super-resolution with a multi-objective approach.  ...  Our main contributions can be summarized in the following four aspects: • releasing several fast, accurate and lightweight super-resolution architectures and models (FALSR-A being the best regarding visual  ... 
arXiv:1901.07261v3 fatcat:hneltmxpq5cvxhcx2qi6pvkn7y

Fast Neural Architecture Search for Lightweight Dense Prediction Networks [article]

Lam Huynh, Esa Rahtu, Jiri Matas, Janne Heikkila
2022 arXiv   pre-print
We present LDP, a lightweight dense prediction neural architecture search (NAS) framework.  ...  The performance of LDP is evaluated on monocular depth estimation, semantic segmentation, and image super-resolution tasks on diverse datasets, including NYU-Depth-v2, KITTI, Cityscapes, COCO-stuff, DIV2K  ...  Image Super-resolution on Urban100 tend to produce both accurate and lightweight architectures.  ... 
arXiv:2203.01994v3 fatcat:nnz34pody5banfrqpkaanpszau

Efficient Residual Dense Block Search for Image Super-Resolution [article]

Dehua Song, Chang Xu, Xu Jia, Yiyi Chen, Chunjing Xu, Yunhe Wang
2019 arXiv   pre-print
Focusing on this issue, we propose an efficient residual dense block search algorithm with multiple objectives to hunt for fast, lightweight and accurate networks for image super-resolution.  ...  Secondly, the network architecture is evolved with the guidance of block credits to acquire an accurate super-resolution network.  ...  Besides, several works (Ahn, Kang, and Sohn 2018; Hui, Wang, and Gao 2018) Super-resolution Neural Architecture Search To acquire fast, lightweight and accurate super-resolution networks, evolution-based  ... 
arXiv:1909.11409v3 fatcat:onw4yqeigbbv5gbutwby4ujace

Efficient Residual Dense Block Search for Image Super-Resolution

Dehua Song, Chang Xu, Xu Jia, Yiyi Chen, Chunjing Xu, Yunhe Wang
2020 Proceedings of the AAAI Conference on Artificial Intelligence (AAAI-20)
Focusing on this issue, we propose an efficient residual dense block search algorithm with multiple objectives to hunt for fast, lightweight and accurate networks for image super-resolution.  ...  Secondly, the network architecture is evolved with the guidance of block credits to acquire an accurate super-resolution network.  ...  (Dong et al. 2014) Super-resolution Neural Architecture Search To acquire fast, lightweight and accurate super-resolution networks, evolution-based NAS algorithms are employed and adapted to super-resolution  ... 
doi:10.1609/aaai.v34i07.6877 fatcat:cbxpx3oxmvd5rolu6diwcxttmu

Searching Efficient 3D Architectures with Sparse Point-Voxel Convolution [article]

Haotian Tang, Zhijian Liu, Shengyu Zhao, Yujun Lin, Ji Lin, Hanrui Wang, Song Han
2020 arXiv   pre-print
To explore the spectrum of efficient 3D models, we first define a flexible architecture design space based on SPVConv, and we then present 3D Neural Architecture Search (3D-NAS) to search the optimal network  ...  To this end, we propose Sparse Point-Voxel Convolution (SPVConv), a lightweight 3D module that equips the vanilla Sparse Convolution with the high-resolution point-based branch.  ...  We thank Nick Stathas and Yue Dong for their feedback on the draft. This work is supported by MIT Quest for Intelligence, MIT-IBM Watson AI Lab, Xilinx and Samsung.  ... 
arXiv:2007.16100v2 fatcat:omt5fknvsndhlboulnvy6chh2i

Neural Architecture Design for GPU-Efficient Networks [article]

Ming Lin, Hesen Chen, Xiuyu Sun, Qi Qian, Hao Li, Rong Jin
2020 arXiv   pre-print
This design principle enables us to search for GPU-efficient network structures effectively by a simple and lightweight method as opposed to most Neural Architecture Search (NAS) methods that are complicated  ...  Although many studies are devoted to optimizing the structure of deep models for efficient inference, most of them do not leverage the architecture of modern GPUs for fast inference, leading to suboptimal  ...  This design principle simplifies the design space and therefore enables us to use a simple and lightweight Neural Architecture Search (NAS) method to search for GPU-efficient networks effectively.  ... 
arXiv:2006.14090v4 fatcat:deay7e3j6ves5llwzbanx57q54

Neural Architecture Search for Lightweight Non-Local Networks [article]

Yingwei Li, Xiaojie Jin, Jieru Mei, Xiaochen Lian, Linjie Yang, Cihang Xie, Qihang Yu, Yuyin Zhou, Song Bai, Alan Yuille
2020 arXiv   pre-print
Secondly, by relaxing the structure of the LightNL block to be differentiable during training, we propose an efficient neural architecture search algorithm to learn an optimal configuration of LightNL  ...  Firstly, we propose a Lightweight Non-Local (LightNL) block by squeezing the transformation operations and incorporating compact features.  ...  Our proposed searching algorithm is fast and delivers high-performance lightweight models.  ... 
arXiv:2004.01961v1 fatcat:dqjs3xz5pneovctcg43dnv5tee

M-FasterSeg: An Efficient Semantic Segmentation Network Based on Neural Architecture Search [article]

Huiyu Kuang
2021 arXiv   pre-print
First, a neural architecture search (NAS) method is used to find a semantic segmentation network with multiple resolution branches.  ...  This paper proposes an improved structure of a semantic segmentation network based on a deep learning network that combines a self-attention neural network with neural architecture search methods.  ...  Fast neural network architecture search technology The fast neural network architecture search technology derives from Neural Architecture Search (NAS) (Zoph and Le, 2016).  ... 
arXiv:2112.07918v2 fatcat:xjvaidohrnh6hiafis475umnqe

Balanced Two-Stage Residual Networks for Image Super-Resolution

Yuchen Fan, Honghui Shi, Jiahui Yu, Ding Liu, Wei Han, Haichao Yu, Zhangyang Wang, Xinchao Wang, Thomas S. Huang
2017 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)  
We evaluated our models on the New Trends in Image Restoration and Enhancement workshop and challenge on image super-resolution (NTIRE SR 2017).  ...  The deep residual design with constrained depth achieves the optimal balance between the accuracy and the speed for super-resolving images.  ...  While deep models tend to yield high accuracy, they lead to very heavy computation in the task of searching the best deep network architecture as well as training and testing the deep neural networks.  ... 
doi:10.1109/cvprw.2017.154 dblp:conf/cvpr/FanSYLHYWWH17 fatcat:civz7z2ogvfbzn6cbrkfinjywa

Trends in Super-High-Definition Imaging Techniques Based on Deep Neural Networks

Hyung-Il Kim, Seok Bong Yoo
2020 Mathematics  
The super-high-definition imaging technology based on deep neural networks improves the image resolution as well as effectively removes the undesired compressed Poisson noise that may occur during real  ...  This solution of using deep neural networks at the receiving end to solve the image degradation problem can be used in an intelligent image analysis platform that performs accurate image processing and  ...  Herein, the end-to-end mapping between low- and high-resolution images is represented as a deep CNN with a lightweight structure.  ... 
doi:10.3390/math8111907 fatcat:du52r6z5nneh5lmwbtmacbr5jm

Single-Image Super-Resolution Neural Network via Hybrid Multi-Scale Features

Wenfeng Huang, Xiangyun Liao, Lei Zhu, Mingqiang Wei, Qiong Wang
2022 Mathematics  
In summary, we propose a novel multi-scale super-resolution neural network (HMSF), which is more lightweight, has fewer parameters, and requires less execution time, but has better performance than the  ...  Experiments on five popular benchmarks demonstrate that our super-resolution approach achieves better performance with fewer parameters and less memory consumption, compared to more than 20 SOTAs.  ...  Conclusions In this paper, we propose a lightweight and fast super-resolution method, based on hybrid multi-scale features.  ... 
doi:10.3390/math10040653 fatcat:6u2u6b2x5vb6rmc57wbjgnc46e

EMC2-NIPS 2019 Abstracts of Invited Talks

2019 2019 Fifth Workshop on Energy Efficient Machine Learning and Cognitive Computing - NeurIPS Edition (EMC2-NIPS)  
estimation, super-resolution, localization and mapping.  ...  This talk will uncover the need for building accurate, platform-specific power and latency models for convolutional neural networks (CNNs) and efficient hardware-aware CNN design methodologies, thus allowing  ...  In line with this trend, there has been an active body of research on both algorithms and hardware architectures for neural network specialization.  ... 
doi:10.1109/emc2-nips53020.2019.00007 fatcat:bvtcsgwxsrh3bmwh6tba3ly3ra

AIM 2020 Challenge on Efficient Super-Resolution: Methods and Results [article]

Kai Zhang, Martin Danelljan, Yawei Li, Radu Timofte, Jie Liu, Jie Tang, Gangshan Wu, Yu Zhu, Xiangyu He, Wenjie Xu, Chenghua Li, Cong Leng (+73 others)
2020 arXiv   pre-print
This paper reviews the AIM 2020 challenge on efficient single image super-resolution with focus on the proposed solutions and results.  ...  The challenge task was to super-resolve an input image with a magnification factor x4 based on a set of prior examples of low and corresponding high resolution images.  ...  Acknowledgements We thank the AIM 2020 sponsors: HUAWEI, MediaTek, Google, NVIDIA, Qualcomm, and Computer Vision Lab (CVL) ETH Zurich. A Teams and affiliations  ... 
arXiv:2009.06943v1 fatcat:2s7k5wsgsjgo5flnqaby26cn64

DAQ: Channel-Wise Distribution-Aware Quantization for Deep Image Super-Resolution Networks [article]

Cheeun Hong, Heewon Kim, Sungyong Baik, Junghun Oh, Kyoung Mu Lee
2021 arXiv   pre-print
Quantizing deep convolutional neural networks for image super-resolution substantially reduces their computational costs.  ...  Our new method outperforms recent training-free and even training-based quantization methods to the state-of-the-art image super-resolution networks in ultra-low precision.  ...  Learning to quantize deep networks by optimizing quantization intervals with task loss. In CVPR, 2019.  ...  Li, and Qingyuan Li. Fast, accurate and lightweight super-resolution  ... 
arXiv:2012.11230v2 fatcat:vx5j3lnhozfozkyndqaiqhr4iy

Transformer for Single Image Super-Resolution [article]

Zhisheng Lu, Juncheng Li, Hong Liu, Chaoyan Huang, Linlin Zhang, Tieyong Zeng
2022 arXiv   pre-print
Single image super-resolution (SISR) has witnessed great strides with the development of deep learning.  ...  In this paper, we propose a novel Efficient Super-Resolution Transformer (ESRT) for SISR.  ...  They can be mainly divided into two categories: the architecture manual-designed methods and the neural architecture search-based methods.  ... 
arXiv:2108.11084v3 fatcat:uwzxpl6oonarlkquww5b3feckq