A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2021; you can also visit the original URL.
The file type is application/pdf.
DEM Super-Resolution with EfficientNetV2
[article]
2021
arXiv
pre-print
Efficient climate change monitoring and modeling rely on high-quality geospatial and environmental datasets. ...
Digital Elevation Model (DEM) datasets are one such example: whereas their low-resolution versions are widely available, high-resolution ones are scarce. ...
In the proposed model, we modify MobileNetV3 blocks for use in super-resolution tasks. ...
arXiv:2109.09661v1
fatcat:lufkvcccmvgq5od2dio2aa4iry
SqueezeNAS: Fast neural architecture search for faster semantic segmentation
[article]
2019
arXiv
pre-print
While Neural Architecture Search (NAS) has been effectively used to develop low-latency networks for image classification, there has been relatively little effort to use NAS to optimize DNN architectures ...
We also compare our networks to the efficient segmentation networks proposed in MobileNetV3 [36] . ...
Sampling from the Gumbel-Softmax distribution allows us to efficiently optimize the architecture distribution by using gradient descent on the stochastic supernetwork. ...
arXiv:1908.01748v2
fatcat:knlqepbolrdejigqwmnda5pzwq
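The SqueezeNAS snippet above mentions sampling from the Gumbel-Softmax distribution to optimize an architecture distribution by gradient descent. A minimal NumPy sketch of that relaxation (function and variable names are illustrative, not from the paper): categorical logits over candidate blocks are perturbed with Gumbel noise and pushed through a temperature-scaled softmax, giving a differentiable, nearly one-hot weighting of the candidates.

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Differentiable relaxation of sampling from softmax(logits).

    Adds Gumbel(0, 1) noise to the logits and applies a temperature-
    scaled softmax; as tau -> 0 the output approaches a one-hot sample.
    """
    if rng is None:
        rng = np.random.default_rng()
    gumbel = -np.log(-np.log(rng.uniform(1e-10, 1.0, size=logits.shape)))
    y = (logits + gumbel) / tau
    y = y - y.max()                    # subtract max for numerical stability
    expy = np.exp(y)
    return expy / expy.sum()

# Three candidate blocks in one supernetwork layer; the soft sample
# weights each candidate's output during architecture search.
weights = gumbel_softmax(np.array([2.0, 0.5, -1.0]), tau=0.5)
```

With a low temperature the weights concentrate on one candidate, so the supernetwork behaves almost like a discrete architecture while remaining differentiable.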
Multi-path Neural Networks for On-device Multi-domain Visual Classification
[article]
2021
arXiv
pre-print
MobileNetV3-like search space. ...
MobileNetV3-like architectures. ...
Based on the single-domain efficient NAS framework [1], the proposed multi-path NAS for MDL uses multiple reinforcement learning (RL) controllers, where each selects an optimal path from the super-network ...
arXiv:2010.04904v2
fatcat:x4ietulpqrcydhfumtxisy5pra
MicroNet: Improving Image Recognition with Extremely Low FLOPs
[article]
2021
arXiv
pre-print
For instance, under the constraint of 12M FLOPs, MicroNet achieves 59.4% top-1 accuracy on ImageNet classification, outperforming MobileNetV3 by 9.6%. ...
[44] adapts image resolution to achieve efficient inference. Another line of work keeps the architectures fixed, but adapts parameters. ...
Note that input resolution 224×224 is used for MicroNet and related works other than HBONet/TinyNet, whose input resolutions are shown in brackets. ...
arXiv:2108.05894v1
fatcat:ablts26dijbfznzippauc2vioa
AIM 2020 Challenge on Efficient Super-Resolution: Methods and Results
[article]
2020
arXiv
pre-print
They gauge the state-of-the-art in efficient single image super-resolution. ...
This paper reviews the AIM 2020 challenge on efficient single image super-resolution with focus on the proposed solutions and results. ...
The core of this approach is to use modified MobileNetV3 [16] blocks to design an efficient method for SR. ...
arXiv:2009.06943v1
fatcat:2s7k5wsgsjgo5flnqaby26cn64
Model Rubik's Cube: Twisting Resolution, Depth and Width for TinyNets
[article]
2020
arXiv
pre-print
The giant formula for simultaneously enlarging the resolution, depth, and width provides us with a Rubik's cube for neural networks, so that we can find networks with high efficiency and excellent performance by twisting the three dimensions. ...
The straightforward way to design tiny networks is to apply the experience used in EfficientNet [44]. ...
arXiv:2010.14819v2
fatcat:v3rnohc26bee7o6lfbg3t3ytwa
Factorizing and Reconstituting Large-Kernel MBConv for Lightweight Face Recognition
2019
2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)
We combine FR-MBConv with MobileNetV3 [16] to build a lightweight face recognition model. ...
NAS methods normally use hand-crafted MBConv as the building block. However, they mainly search for block-related hyperparameters, and the structure of MBConv itself has been largely overlooked. ...
In this section, we build a lightweight face recognition model based on the SOTA hardware-aware, efficient MobileNetV3-Large. ...
doi:10.1109/iccvw.2019.00329
dblp:conf/iccvw/LyuJZHC19
fatcat:zodpnifw5vbxtd6hscbtxigf3a
Towards Efficient and Data Agnostic Image Classification Training Pipeline for Embedded Systems
[article]
2021
arXiv
pre-print
Resulting models are computationally efficient and can be deployed to CPU using the OpenVINO toolkit. ...
Nowadays, deep learning-based methods have achieved remarkable progress at the image classification task across a wide range of commonly used datasets (ImageNet, CIFAR, SVHN, Caltech 101, SUN397, etc.) ...
Smith, L.N., Topin, N.: Super-convergence: very fast training of neural networks using large learning rates. In: Defense + Commercial Sensing (2019). ...
arXiv:2108.07049v1
fatcat:c3i2mav6g5cr7p2rprtnqzhkfa
Application of Ghost-DeblurGAN to Fiducial Marker Detection
[article]
2022
arXiv
pre-print
The datasets and codes used in this paper are available at: https://github.com/York-SDCNLab/Ghost-DeblurGAN. ...
Compared with BN, which is not recommended in low-level tasks such as super-resolution [37], IN preserves more scale information and maintains the same normalization procedure in training and inference ...
However, degradation occurs in terms of PSNR when using MobileNetV3 as the backbone, while the deblurring quality of GhostNet is comparable with that of MobileNetV2. ...
arXiv:2109.03379v3
fatcat:7tsa5qcgnzfalj2pwugju5eksy
Network Augmentation for Tiny Deep Learning
[article]
2022
arXiv
pre-print
At test time, only the tiny model is used for inference, incurring zero inference overhead. We demonstrate the effectiveness of NetAug on image classification and object detection. ...
Figure 5 demonstrates the results of YoloV3+MobileNetV2 w0.35 and YoloV3+MobileNetV3 w0.35 under different input resolutions. ...
Its goal is to provide efficient performance estimation in NAS. ...
arXiv:2110.08890v2
fatcat:l5pcfamu6zghnl7loxlbyjyzbm
Fast Neural Architecture Search for Lightweight Dense Prediction Networks
[article]
2022
arXiv
pre-print
The performance of LDP is evaluated on monocular depth estimation, semantic segmentation, and image super-resolution tasks on diverse datasets, including NYU-Depth-v2, KITTI, Cityscapes, COCO-stuff, DIV2K ...
Starting from a pre-defined generic backbone, LDP applies the novel Assisted Tabu Search for efficient architecture exploration. ...
Image super-resolution task has also been immensely improved using deep neural networks. Dong et al. ...
arXiv:2203.01994v3
fatcat:nnz34pody5banfrqpkaanpszau
Plant Leaf Disease Recognition Using Depth-Wise Separable Convolution-Based Models
2021
Symmetry
Besides, we have simulated our DSCPLD models using both full and segmented plant leaf images and conclude that, after using the modified ACS, all models improve in accuracy and F1-score ...
For this reason, initially, modified adaptive centroid-based segmentation (ACS) is used to trace the proper region of interest (ROI). ...
A concrete representation of experiments on MobileNetV3 with width multipliers. Table 16. A concrete representation of experiments on MobileNetV3 with resolutions. ...
doi:10.3390/sym13030511
fatcat:2vljqd2wenerrlzzbr6kgcerhy
Discovering Multi-Hardware Mobile Models via Architecture Search
[article]
2021
arXiv
pre-print
a single multi-hardware model yields similar or better results than SoTA performance on accelerators like GPU, DSP, and EdgeTPU, which was previously achieved by different models, while having similar performance with MobileNetV3 ...
Le, Hartwig Adam for helpful feedback and discussion; Cheng-Ming Chiang, Guan-Yu Chen, Koan-Sin Tan, Yu-Chieh Lin from MediaTek for useful guidance on MediaTek benchmarks; and QCT (Qualcomm CDMA Technologies ...
Architecture search and training: We use ImageNet data [9] to search, train and evaluate. Input resolution is 224×224 and ResNet data preprocessing is used. ...
arXiv:2008.08178v2
fatcat:sg2qrauyjvesvpdjszt567o4aa
MUXConv: Information Multiplexing in Convolutional Neural Networks
[article]
2020
arXiv
pre-print
On ImageNet, the resulting models, dubbed MUXNets, match the performance (75.3%) of MobileNetV3 while being 1.6× more compact, and outperform other mobile models in all three criteria. ...
optimizing accuracy, compactness, and computational efficiency. ...
This idea has also been particularly effective for image super-resolution [35] in the form of "subpixel" convolution. ...
arXiv:2003.13880v2
fatcat:shlwiywymve5zhfujzltoaotba
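The MUXConv snippet above references "subpixel" convolution for image super-resolution [35]. A minimal NumPy sketch of the depth-to-space rearrangement (pixel shuffle) that sub-pixel convolution relies on; shapes and names here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) tensor into (C, H*r, W*r).

    A convolution produces r*r channel groups per output channel;
    interleaving them spatially upscales the feature map by factor r.
    """
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)        # split channels into (C, r, r)
    x = x.transpose(0, 3, 1, 4, 2)      # reorder to (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)   # interleave into the spatial grid

# 8 channels = 2 output channels x (2x2) sub-pixel positions.
feat = np.arange(2 * 4 * 3 * 3, dtype=float).reshape(2 * 4, 3, 3)
up = pixel_shuffle(feat, r=2)           # (2, 6, 6)
```

This is the same semantics as PyTorch's `nn.PixelShuffle`: output pixel `(h*r + i, w*r + j)` of channel `c` comes from input channel `c*r*r + i*r + j` at `(h, w)`.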
S&CNet: Monocular Depth Completion for Autonomous Systems and 3D Reconstruction
[article]
2019
arXiv
pre-print
In this paper, a lightweight yet efficient network (S&CNet) is proposed to obtain a good trade-off between efficiency and accuracy for dense depth completion. ...
The CAM [26] pointed out the same issue and proposed a global-variance-pooling-based SE module to improve performance on super-resolution. ...
Most recently, MobileNetV3 [23] further improved the performance of efficient networks by introducing the squeeze-and-excitation module. ...
arXiv:1907.06071v2
fatcat:xkjf776psjahdbylpfm26aeyu4
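Several entries above (S&CNet, MobileNetV3) refer to the squeeze-and-excitation (SE) module. A minimal NumPy sketch of the channel-reweighting idea: global average pooling "squeezes" each channel to a scalar, and a small bottleneck produces sigmoid gates that rescale the channels. The weight matrices below are random placeholders standing in for trained parameters:

```python
import numpy as np

def squeeze_excite(x, w1, w2):
    """Squeeze-and-excitation over a (C, H, W) feature map.

    Squeeze: global average pooling to one value per channel.
    Excite:  a two-layer bottleneck (ReLU then sigmoid) produces
             per-channel gates in (0, 1) that rescale the input.
    """
    z = x.mean(axis=(1, 2))               # squeeze: (C,)
    h = np.maximum(0.0, w1 @ z)           # reduce:  (C // r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))   # sigmoid gates: (C,)
    return x * s[:, None, None]           # excite: rescale each channel

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))
w1 = rng.standard_normal((2, 8))          # reduction ratio r = 4
w2 = rng.standard_normal((8, 2))
y = squeeze_excite(x, w1, w2)
```

Because the gates lie strictly between 0 and 1, the module can only attenuate channels, letting the network emphasize informative ones at negligible FLOP cost.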
Showing results 1 — 15 out of 80 results