1,500 Hits in 6.5 sec

Pay Less Attention with Lightweight and Dynamic Convolutions [article]

Felix Wu, Angela Fan, Alexei Baevski, Yann N. Dauphin, Michael Auli
2019 arXiv   pre-print
Next, we introduce dynamic convolutions, which are simpler and more efficient than self-attention.  ...  In this paper, we show that a very lightweight convolution can perform competitively with the best reported self-attention results.  ...  Figure 2: Illustration of self-attention, lightweight convolutions, and dynamic convolutions.  ... 
arXiv:1901.10430v2 fatcat:z7ptqoo2h5eqfg33rplxzxta6i
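The lightweight convolution this paper proposes is a depthwise convolution whose taps are softmax-normalized and shared across channel groups ("heads"). A minimal NumPy sketch of that idea (the head count, kernel size, and zero-padding scheme here are illustrative assumptions, not the paper's exact configuration):

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def lightweight_conv(x, w):
    """Lightweight convolution in the spirit of Wu et al. (2019).

    x: (T, C) input sequence; w: (H, K) one K-tap kernel per head,
    shared by the C // H channels in that head's group. Taps are
    softmax-normalized, so each kernel computes a weighted average.
    """
    T, C = x.shape
    H, K = w.shape
    group = C // H
    w_norm = np.stack([softmax(w[h]) for h in range(H)])  # (H, K)
    pad = K // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))                  # zero-pad in time
    out = np.zeros((T, C))
    for t in range(T):
        window = xp[t:t + K]                              # (K, C) local context
        for h in range(H):
            cols = slice(h * group, (h + 1) * group)
            out[t, cols] = w_norm[h] @ window[:, cols]
    return out
```

Because the softmax taps sum to 1, a constant signal passes through unchanged away from the padded edges, and the layer holds only H*K weights, versus the O(T^2) attention matrix self-attention materializes per layer.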

Fine-Grained Grape Leaf Diseases Recognition Method Based on Improved Lightweight Attention Network

Peng Wang, Tong Niu, Yanru Mao, Bin Liu, Shuqin Yang, Dongjian He, Qiang Gao
2021 Frontiers in Plant Science  
Then, with ShuffleNet-v2 as the backbone, an efficient channel attention strategy is introduced to strengthen the model's ability to extract fine-grained lesion features.  ...  To solve the above problems, a cross-channel interactive attention mechanism-based lightweight model (ECA-SNet) is proposed.  ...  AUTHOR CONTRIBUTIONS PW, YM, and TN collected data. PW designed and performed the experiment, analyzed data, trained algorithms, and wrote the manuscript.  ... 
doi:10.3389/fpls.2021.738042 pmid:34745172 pmcid:PMC8569304 fatcat:4juukear4bhqndnz4u27zkpvee
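The cross-channel interactive attention this entry builds on (the ECA idea) gates channels with a 1D convolution over the pooled channel descriptor, avoiding the dimensionality-reducing MLP of SE blocks. A sketch under stated assumptions: the kernel here is a placeholder for weights that would be learned, and the kernel size is fixed rather than derived from the channel count as in ECA-Net:

```python
import numpy as np

def eca_block(x, kernel):
    """ECA-style channel gating (illustrative sketch).

    x: (C, H, W) feature map. Global average pooling gives one
    descriptor per channel; a 1D conv over neighboring channels
    (no dimensionality reduction) yields per-channel sigmoid gates.
    `kernel` stands in for the learned 1D conv weights.
    """
    C = x.shape[0]
    k = len(kernel)
    pad = k // 2
    desc = x.mean(axis=(1, 2))                      # (C,) channel descriptor
    dp = np.pad(desc, pad)                          # zero-pad channel axis
    logits = np.array([kernel @ dp[i:i + k] for i in range(C)])
    gates = 1.0 / (1.0 + np.exp(-logits))           # sigmoid
    return x * gates[:, None, None]                 # rescale each channel
```

The whole module costs only k weights per layer, which is why it suits lightweight backbones like ShuffleNet-v2.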

NASNet: A Neuron Attention Stage-by-Stage Net for Single Image Deraining [article]

Xu Qin, Zhilin Wang
2020 arXiv   pre-print
For one thing, we pay more attention to the Neuron relationship and propose a lightweight Neuron Attention (NA) architectural mechanism.  ...  We concatenate and fuse stage-level information dynamically by the NA module.  ...  convolution operations with short skip connection, adaptive and dynamic recalibration by NA.  ... 
arXiv:1912.03151v2 fatcat:n5p2hlpogrgcdcrz44fxefedem

Anchor-based Plain Net for Mobile Image Super-Resolution [article]

Zongcai Du, Jie Liu, Jie Tang, Gangshan Wu
2021 arXiv   pre-print
Along with the rapid development of real-world applications, higher requirements on the accuracy and efficiency of image super-resolution (SR) are brought forward.  ...  First, we conduct an experiment about meta-node latency by decomposing lightweight SR architectures, which determines the portable operations we can utilize.  ...  The capacity of lightweight models is limited, so recent architecture designs pay attention to making full use of information of different levels as has been stated above.  ... 
arXiv:2105.09750v2 fatcat:uewklafhbnhlnjbguvxzr5syd4

LADNet: an ultra-lightweight and efficient Dilated Residual Network with Light-Attention Module

Junyan Yang, Jie Jiang, Yujie Fang, Jiahao Sun
2021 IEEE Access  
With the Light-Attention module, LADNet reduces the parameters of the convolutional neural network to 71% of the original.  ...  So, this work proposes a spatial and channel hybrid attention module (the Light-Attention module), an ultra-lightweight but efficient attention module.  ...  The network has fewer parameters and is easier to train. It can be said that the Light-Attention module is the first super ultra-lightweight attention module at present.  ... 
doi:10.1109/access.2021.3065338 fatcat:nqqjgzirfrcuzftgphimzqft64

Lightweight Multilevel Feature Fusion Network for Hyperspectral Image Classification

Miaomiao Liang, Huai Wang, Xiangchun Yu, Zhe Meng, Jianbing Yi, Licheng Jiao
2021 Remote Sensing  
The LMFN decouples spectral–spatial feature extraction into two modules: point-wise 3D convolution to learn correlations between adjacent bands with no spatial perception, and depth-wise convolution to  ...  In this paper, we propose a novel lightweight multilevel feature fusion network (LMFN) that can achieve satisfactory HSI classification with fewer parameters and a lower computational burden.  ...  The 2D-CNN focuses on spatial local perception and feature recombination, but pays less attention to spectral local perception.  ... 
doi:10.3390/rs14010079 fatcat:u4q4qoa2p5bwldv3zudsf3av2i
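The decoupling described for LMFN follows the general depthwise/pointwise factorization: a per-channel spatial filter, then a 1x1 convolution that mixes channels. A sketch of that factorization (the 3x3 kernel and 2D shapes are illustrative; LMFN itself operates on hyperspectral cubes with 3D convolutions):

```python
import numpy as np

def depthwise_separable(x, dw, pw):
    """Depthwise 3x3 conv (spatial, per channel) then pointwise 1x1
    conv (cross-channel). x: (C, H, W); dw: (C, 3, 3); pw: (C_out, C).
    """
    C, H, W = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))       # zero-pad spatially
    spatial = np.zeros((C, H, W))
    for c in range(C):                              # each channel filtered alone
        for i in range(H):
            for j in range(W):
                spatial[c, i, j] = (dw[c] * xp[c, i:i + 3, j:j + 3]).sum()
    # 1x1 conv: mix channels independently at every spatial location
    return np.einsum('oc,chw->ohw', pw, spatial)
```

The factorization costs C*9 + C_out*C weights instead of C_out*C*9 for a standard 3x3 convolution, which is the source of the "fewer parameters, lower computational burden" claim.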

Entity–Relation Extraction—A Novel and Lightweight Method Based on a Gate Linear Mechanism

Guangming Peng, Xiong Chen
2020 Electronics  
In this paper, we introduce dynamic convolutions based on lightweight convolutions to process long sequences, which thus reduces the number of parameters to a low level.  ...  In this paper, we propose a new end-to-end model based on dilated convolutional units and the gate linear mechanism as an alternative to those recurrent models.  ...  Dynamic Convolution In this paper, we abandon the idea of using a self-attention mechanism and introduce multi-channel integrated dynamic convolutions based on lightweight convolutions.  ... 
doi:10.3390/electronics9101637 fatcat:sl2rjznrtjfdnfe5l4vooqdcv4
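The dynamic convolution this entry adopts differs from the lightweight variant in that the kernel taps are predicted from the input at each position rather than fixed after training. A single-head NumPy sketch (the linear map `W` is a hypothetical stand-in for the learned tap predictor):

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def dynamic_conv(x, W):
    """Single-head dynamic convolution over a (T, C) sequence.

    W: (K, C) linear map predicting K softmax-normalized taps from
    x[t], so the effective kernel changes at every timestep.
    """
    T, C = x.shape
    K = W.shape[0]
    pad = K // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))       # zero-pad in time
    out = np.zeros((T, C))
    for t in range(T):
        taps = softmax(W @ x[t])               # (K,) position-specific kernel
        out[t] = taps @ xp[t:t + K]            # weighted average of the window
    return out
```

Like lightweight convolution, the cost is linear in sequence length, which is why it suits the long sequences mentioned in the abstract.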

CLNet: Complex Input Lightweight Neural Network designed for Massive MIMO CSI Feedback [article]

Sijie Ji, Mo Li
2021 arXiv   pre-print
The experiment result shows that CLNet outperforms the state-of-the-art method by average accuracy improvement of 5.41% in both outdoor and indoor scenarios with average 24.1% less computational overhead  ...  Recently, numerous deep learning based CSI feedback approaches demonstrate their efficiency and potential.  ...  In order to pay more attention to such clusters, CLNet employs a CBAM block [17] to distinguish them with weights in the spatial domain as Figure 3 illustrates.  ... 
arXiv:2102.07507v2 fatcat:lvpvlc4oljgnllaa4tejtd7jma
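The CBAM block CLNet employs applies channel attention, then spatial attention, in sequence. A stripped-down sketch of that data flow (the real module's shared MLP and 7x7 conv are replaced here by parameter-free sums of average- and max-pooled descriptors, so this only illustrates the two-stage gating, not the learned weighting):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cbam_sketch(x):
    """Sequential channel-then-spatial gating in the spirit of CBAM.
    x: (C, H, W) feature map.
    """
    # Channel attention: pool over space, gate each channel.
    ch_gate = sigmoid(x.mean(axis=(1, 2)) + x.max(axis=(1, 2)))   # (C,)
    x = x * ch_gate[:, None, None]
    # Spatial attention: pool over channels, gate each location.
    sp_gate = sigmoid(x.mean(axis=0) + x.max(axis=0))             # (H, W)
    return x * sp_gate[None, :, :]
```

The spatial stage is what lets the network up-weight particular regions, which matches the paper's use of CBAM to emphasize propagation clusters in the spatial domain.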

Deeper Siamese Network with Stronger Feature Representation for Visual Tracking

Chaoyi Zhang, Howard Wang, Jiwei Wen, Li Peng
2020 IEEE Access  
Thirdly, we incorporate a novel lightweight residual channel attention mechanism into the backbone network, which expands the weight gap between different channels and helps the network pay more attention  ...  features and utilization of channel attention mechanism, have been made in this paper.  ...  Thirdly, we equip the network with a lightweight residual channel attention mechanism, which could enlarge the importance gap between different channels in the same layer, and make the network pay more  ... 
doi:10.1109/access.2020.3005511 fatcat:6kcbre5ozfhwbnydmh5qnmhjfu

Efficient Image Super-Resolution via Self-Calibrated Feature Fuse

Congming Tan, Shuli Cheng, Liejun Wang
2022 Sensors  
Compared with the existing transposed convolution, it can greatly reduce the computational burden of the network without reducing the reconstruction effect.  ...  However, due to a large amount of computation and parameters, SR technology is greatly limited on devices with limited computing power, so a trade-off between network performance and network parameters is needed.  ...  In future work, we will continue to explore lightweight SR networks and try to introduce a non-parametric attention mechanism or dynamic convolution layer to enhance information extraction in the  ... 
doi:10.3390/s22010329 pmid:35009871 pmcid:PMC8749868 fatcat:2xn2ys6q5bfrvn46jf3k6t4k5i

Walnut Ripeness Detection Based on Coupling Information and Lightweight YOLOv4

Kaixuan Cui, Shuchai Su, Jiawei Cai, Fengjun Chen
2022 North Atlantic University Union: International Journal of Circuits, Systems and Signal Processing  
We design a parallel convolution structure with depthwise convolution stacking (PCSDCS) to reduce parameters and improve feature extraction ability.  ...  Compared with the Faster R-CNN model, EfficientDet-D1 model, YOLOv3 model, and YOLOv4 model, the lightweight YOLOv4 model improves 8.77%, 4.84%, 5.43%, and 0.06% in mean average precision, 74.60 FPS, 55.60  ...  Since Ripe is the optimum harvest time, we need to pay more attention to the test results under this ripeness.  ... 
doi:10.46300/9106.2022.16.29 fatcat:beyk7o75dfbzzcmammxizvakqa

A Novel Malware Detection and Family Classification Scheme for IoT Based on DEAM and DenseNet

Changguang Wang, Ziqiu Zhao, Fangwei Wang, Qingru Li, Athanasios V. Vasilakos
2021 Security and Communication Networks  
The DEAM is a general lightweight attention module improved based on the Convolutional Block Attention Module (CBAM), which can strengthen the attention to the characteristics of malware and improve the  ...  In this paper, a new simple and effective attention module for Convolutional Neural Networks (CNNs), named the Depthwise Efficient Attention Module (DEAM), is proposed and combined with a DenseNet to propose  ...  Conclusion: This paper proposes a new lightweight and effective convolutional neural network attention module, defined as DEAM, and combines it with the DenseNet for malware detection and family classification  ... 
doi:10.1155/2021/6658842 fatcat:u2lplch3ardynn2tggpdxca7gy

Fast and Lightweight Human Pose Estimation

Haopan Ren, Wenming Wang, Kaixiang Zhang, Dejian Wei, Yanyan Gao, Yue Sun
2021 IEEE Access  
We pay attention to single-person pose estimation, which is the basis of relevant vision tasks, such as multi-person pose estimation, video-based pose estimation, and pose tracking.  ...  and attention mechanism.  ... 
doi:10.1109/access.2021.3069102 fatcat:j2suqvcmobchrft63uzd5ru77u

Image Reconstruction of Multibranch Feature Multiplexing Fusion Network with Mixed Multilayer Attention

Yuxi Cai, Guxue Gao, Zhenhong Jia, Huicheng Lai
2022 Remote Sensing  
network pay more attention to the key channel information and benefit from it.  ...  Image super-resolution reconstruction achieves better results than traditional methods with the help of the powerful nonlinear representation ability of convolution neural network.  ...  A lightweight enhanced residual channel attention (LERCA) is proposed, which can pay more attention to the high-frequency information in low resolution space.  ... 
doi:10.3390/rs14092029 fatcat:6olmsffpyna55e5oyajp6dmjr4

Rubber Leaf Disease Recognition Based on Improved Deep Convolutional Neural Networks With a Cross-Scale Attention Mechanism

Tiwei Zeng, Chengming Li, Bin Zhang, Rongrong Wang, Wei Fu, Juan Wang, Xirui Zhang
2022 Frontiers in Plant Science  
Compared with MobileNetV1, V2, and ShuffleNetV1, V2 lightweight models, the model parameters and size are reduced by more than half, but the recognition accuracy is also improved by 3.86–6.1%.  ...  Specifically, the model uses a group convolution structure to reduce model parameters and provide multiple branches and then embeds multiple dilated convolutions to improve the model's adaptability to  ...  with high heatmap accuracy and pays minimum attention to the irrelevant complex background, thus achieving higher disease recognition accuracy than other models.  ... 
doi:10.3389/fpls.2022.829479 pmid:35295638 pmcid:PMC8918928 fatcat:vxu2pgoti5aidegk3ltkgdl7bu
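The parameter savings attributed to the group-convolution structure follow directly from the grouped weight count: splitting the input and output channels into g groups divides the number of weights by g. A quick check (the layer sizes below are made up for illustration):

```python
def conv_params(c_in, c_out, k, groups=1):
    """Weight count of a 2D convolution with `groups` channel groups
    (biases ignored). Standard conv is groups=1; depthwise is groups=c_in.
    """
    assert c_in % groups == 0 and c_out % groups == 0
    return (c_in // groups) * c_out * k * k

standard = conv_params(64, 128, 3)             # 64*128*9  = 73,728 weights
grouped = conv_params(64, 128, 3, groups=4)    # 16*128*9  = 18,432 weights
depthwise = conv_params(64, 64, 3, groups=64)  # 1*64*9    = 576 weights
```

Halving or better the parameter count, as the abstract reports against MobileNet and ShuffleNet, is therefore achievable with modest group counts, at the cost of blocking cross-group channel mixing unless a shuffle or 1x1 conv follows.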
Showing results 1 — 15 out of 1,500 results