Targeted Kernel Networks: Faster Convolutions with Attentive Regularization
[article]
2018
arXiv
pre-print
We propose Attentive Regularization (AR), a method to constrain the activation maps of kernels in Convolutional Neural Networks (CNNs) to specific regions of interest (ROIs). ...
Traditional CNNs of different types and structures can be modified with this idea into equivalent Targeted Kernel Networks (TKNs), while keeping the network size nearly identical. ...
The localization network that computes warp parameters is a smaller version of the same network, with 3 convolutional layers of the same kernel size (with 16, 32 and 64 kernels respectively) and 3 FC layers ...
arXiv:1806.00523v2
fatcat:moykoklavjhjtdj2hyjvhhekdm
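A minimal sketch of the attentive-regularization idea above: each kernel's activation map is attenuated by a learnable spatial attention mask, here parameterized as a separable Gaussian ROI per output channel (the parameterization is illustrative, not necessarily the paper's exact form).

    import torch
    import torch.nn as nn

    class AttentiveConv2d(nn.Module):
        """Convolution whose activation maps are damped outside a learned ROI."""
        def __init__(self, in_ch, out_ch, k=3):
            super().__init__()
            self.conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
            # One learnable ROI (center, width) per output kernel; illustrative.
            self.mu = nn.Parameter(torch.zeros(out_ch, 2))         # centers in [-1, 1]
            self.log_sigma = nn.Parameter(torch.zeros(out_ch, 2))  # log widths

        def forward(self, x):
            y = self.conv(x)                                   # (B, C, H, W)
            _, C, H, W = y.shape
            ys = torch.linspace(-1, 1, H, device=y.device)
            xs = torch.linspace(-1, 1, W, device=y.device)
            sig = self.log_sigma.exp()
            # Separable Gaussian attention along each spatial axis.
            ay = torch.exp(-0.5 * ((ys[None, :] - self.mu[:, :1]) / sig[:, :1]) ** 2)  # (C, H)
            ax = torch.exp(-0.5 * ((xs[None, :] - self.mu[:, 1:]) / sig[:, 1:]) ** 2)  # (C, W)
            mask = ay[:, :, None] * ax[:, None, :]             # (C, H, W)
            return y * mask[None]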
Video SAR Moving Target Tracking Using Joint Kernelized Correlation Filter
2022
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
A moving target tracking framework based on the joint kernelized correlation filter (JKCF) has been developed. ...
In combination with the initialization and feature-update strategy, the tracking success rate and precision can be improved significantly. ...
A re-initialization mechanism is proposed to cope with tracking collapse. ...
doi:10.1109/jstars.2022.3146035
fatcat:bt2dzl6rp5dk5iqsewdxbvfiii
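For context on the correlation-filter core underlying trackers like the JKCF: a filter is learned in closed form in the Fourier domain and applied by correlation, with the response peak giving the target shift. This is a generic single-channel MOSSE/KCF-style sketch, not the joint filter of the paper.

    import numpy as np

    def train_filter(x, y, lam=1e-2):
        """Closed-form ridge regression in the Fourier domain (linear kernel).
        x: training patch; y: desired Gaussian-shaped response, same shape."""
        X, Y = np.fft.fft2(x), np.fft.fft2(y)
        return Y * np.conj(X) / (X * np.conj(X) + lam)

    def detect(h, z):
        """Correlate the learned filter with a new patch z; the peak is the shift."""
        resp = np.real(np.fft.ifft2(h * np.fft.fft2(z)))
        return np.unravel_index(resp.argmax(), resp.shape)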
Network Deconvolution
[article]
2020
arXiv
pre-print
Convolution is a central operation in Convolutional Neural Networks (CNNs), which applies a kernel to overlapping regions shifted across the image. ...
Filtering with such kernels results in a sparse representation, a desired property that has been missing in the training of neural networks. ...
For a regular convolution layer in a network, we generally have multiple input feature channels and multiple kernels in a layer. ...
arXiv:1905.11926v4
fatcat:r2xsc6f2bbfsxe5l2scenl2zze
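Network deconvolution removes the correlations between overlapping patches before the kernel is applied, which is what yields the sparse representation mentioned above. A rough sketch of that step as ZCA whitening of im2col rows (simplified; the paper operates on multi-channel blocks with subsampling):

    import numpy as np

    def deconv_whiten(patches, eps=1e-5):
        """patches: (N, C*k*k) im2col rows. Returns decorrelated rows (ZCA)."""
        mu = patches.mean(0, keepdims=True)
        cov = np.cov(patches - mu, rowvar=False) + eps * np.eye(patches.shape[1])
        vals, vecs = np.linalg.eigh(cov)
        W = vecs @ np.diag(vals ** -0.5) @ vecs.T  # inverse square root of cov
        return (patches - mu) @ W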
Kervolutional Neural Networks
2019
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Extensive experiments show that kervolutional neural networks (KNN) achieve higher accuracy and faster convergence than the baseline CNN. ...
Convolutional neural networks (CNNs) have enabled the state-of-the-art performance in many computer vision tasks. ...
For the same purpose, deformable convolutional network [10] adds 2-D learnable offsets to regular grid sampling locations for standard convolution, which enables the learning of affine transforms; while ...
doi:10.1109/cvpr.2019.00012
dblp:conf/cvpr/WangYXY19
fatcat:r67upgtf5je3zhbplwxokvli2i
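Kervolution swaps the patch-kernel inner product for a nonlinear kernel function; a minimal polynomial-kernel version via unfold (degree and bias are illustrative choices):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PolyKervolution2d(nn.Module):
        """Convolution with the inner product lifted to a polynomial kernel:
        k(w, x) = (w . x + c) ** d, computed patch-wise via unfold."""
        def __init__(self, in_ch, out_ch, k=3, degree=2, c=1.0):
            super().__init__()
            self.weight = nn.Parameter(torch.randn(out_ch, in_ch * k * k) * 0.1)
            self.k, self.degree, self.c = k, degree, c

        def forward(self, x):
            B, _, H, W = x.shape
            cols = F.unfold(x, self.k, padding=self.k // 2)  # (B, C*k*k, H*W)
            lin = self.weight @ cols                         # (B, out_ch, H*W)
            return ((lin + self.c) ** self.degree).view(B, -1, H, W)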
Application of Deep Convolutional Neural Network under Region Proposal Network in Patent Graphic Recognition and Retrieval
2021
IEEE Access
Combining candidate boxes with a convolutional neural network (CNN) makes target detection a true end-to-end model structure. ...
Depthwise convolution allows each convolution kernel to be convolved with a single channel's features to achieve channel separation. ...
doi:10.1109/access.2021.3088757
fatcat:qox5sfdpqbexzgxcjepkzfbx34
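The channel separation described in the snippet corresponds to depthwise convolution, where each kernel convolves a single input channel; in PyTorch this is the groups argument:

    import torch.nn as nn

    # Depthwise: groups == in_channels, so each kernel sees one channel.
    depthwise = nn.Conv2d(64, 64, kernel_size=3, padding=1, groups=64)
    # Typically followed by a 1x1 pointwise conv to mix channels again.
    pointwise = nn.Conv2d(64, 128, kernel_size=1)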
Kernel Product Neural Networks
2021
IEEE Access
Attention is an important field for exploring the importance of each convolutional kernel channel/weight. ...
In light of this, Kernel Product (KP) technology is proposed as a simple way to obtain useful nonlinear attention. ...
As shown in Fig. 1, Att_{3×3} denotes the attention over the feature maps of the convolutional kernel of size 3. ...
doi:10.1109/access.2021.3135576
fatcat:zdyhxxghgzew5pgfeinkvgzzmm
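As a reference point for the channel/weight attention this entry discusses, a squeeze-and-excitation-style channel gate (a generic construction, not KP's specific product form):

    import torch.nn as nn

    class ChannelGate(nn.Module):
        """Per-channel attention: global pool -> bottleneck MLP -> sigmoid gate."""
        def __init__(self, ch, r=8):
            super().__init__()
            self.net = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(ch, ch // r, 1), nn.ReLU(inplace=True),
                nn.Conv2d(ch // r, ch, 1), nn.Sigmoid(),
            )

        def forward(self, x):
            return x * self.net(x)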
Kervolutional Neural Networks
[article]
2020
arXiv
pre-print
Extensive experiments show that kervolutional neural networks (KNN) achieve higher accuracy and faster convergence than the baseline CNN. ...
Convolutional neural networks (CNNs) have enabled the state-of-the-art performance in many computer vision tasks. ...
For the same purpose, deformable convolutional network [10] adds 2-D learnable offsets to regular grid sampling locations for standard convolution, which enables the learning of affine transforms; while ...
arXiv:1904.03955v2
fatcat:hw552w5vz5d5dkdsr2yodpr3pu
Local Relation Networks for Image Recognition
[article]
2019
arXiv
pre-print
A network built with local relation layers, called the Local Relation Network (LR-Net), is found to provide greater modeling capacity than its counterpart built with regular convolution on large-scale ...
However, the spatial aggregation in convolution is basically a pattern matching process that applies fixed filters which are inefficient at modeling visual elements with varying spatial distributions. ...
One of them is their greater effectiveness in utilizing large kernel neighborhoods compared to regular convolution networks. ...
arXiv:1904.11491v1
fatcat:r4iu5cespnbx3debffoqf6kxee
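A local relation layer replaces fixed filters with aggregation weights computed from the query and keys inside a neighborhood window; a much-simplified single-head sketch in that spirit (keys double as values, and the geometric prior of the full LR layer is omitted):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LocalAttention2d(nn.Module):
        def __init__(self, ch, k=7):
            super().__init__()
            self.q = nn.Conv2d(ch, ch, 1)   # query projection
            self.kv = nn.Conv2d(ch, ch, 1)  # key projection (also used as value)
            self.k = k

        def forward(self, x):
            B, C, H, W = x.shape
            q = self.q(x).reshape(B, C, 1, H * W)                   # (B,C,1,HW)
            kv = F.unfold(self.kv(x), self.k, padding=self.k // 2)  # (B,C*k*k,HW)
            kv = kv.reshape(B, C, self.k * self.k, H * W)
            # Aggregation weights from query-key similarity inside the window.
            attn = (q * kv).sum(1, keepdim=True).softmax(dim=2)
            return (attn * kv).sum(2).reshape(B, C, H, W)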
Convolutional Hough Matching Networks
[article]
2021
arXiv
pre-print
To validate the effect, we develop the neural network with CHM layers that perform convolutional matching in the space of translation and scaling. ...
We cast it into a trainable neural layer with a semi-isotropic high-dimensional kernel, which learns non-rigid matching with a small number of interpretable parameters. ...
[56] improve the framework with offset-aware correlation kernels with attention modules. Jeon et al. ...
arXiv:2103.16831v1
fatcat:lguow6b6ffg2xia4ibbkwud3ga
SBNet: Sparse Blocks Network for Fast Inference
2018
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
Conventional deep convolutional neural networks (CNNs) apply convolution operators uniformly in space across all feature maps for hundreds of layers - this incurs a high computational cost for real-time ...
We show that such computation masks can be used to reduce computation in the high-resolution main network. ...
However, unlike human attention which helps us reason visual scenes faster, these attentional network structures do not speed up the inference process since the attention weights are dense across the receptive ...
doi:10.1109/cvpr.2018.00908
dblp:conf/cvpr/RenPYU18
fatcat:yh5nbnetdjen5fobh6ya5t35za
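SBNet's reduction can be pictured as gather/convolve/scatter over a block decomposition of the feature map: only tiles whose computation mask is active are convolved. A naive looped illustration (the paper implements this with optimized gather/scatter CUDA kernels; H and W are assumed divisible by the block size):

    import torch
    import torch.nn.functional as F

    def sparse_block_conv(x, mask, weight, block=8):
        """x: (B,C,H,W); mask: (B,1,H,W) binary float; weight: (O,C,3,3)."""
        B, C, H, W = x.shape
        out = x.new_zeros(B, weight.shape[0], H, W)
        # Block-level activity: pool the mask down to the block grid.
        active = F.max_pool2d(mask, block).squeeze(1) > 0
        for b, by, bx in zip(*[t.tolist() for t in active.nonzero(as_tuple=True)]):
            y0, x0 = by * block, bx * block
            # Gather the tile with a 1-pixel halo so the 3x3 conv is exact inside.
            tile = x[b:b+1, :, max(y0-1, 0):y0+block+1, max(x0-1, 0):x0+block+1]
            res = F.conv2d(tile, weight, padding=1)
            oy, ox = int(y0 > 0), int(x0 > 0)  # drop the halo where present
            out[b, :, y0:y0+block, x0:x0+block] = res[0, :, oy:oy+block, ox:ox+block]
        return out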
SBNet: Sparse Blocks Network for Fast Inference
[article]
2018
arXiv
pre-print
Conventional deep convolutional neural networks (CNNs) apply convolution operators uniformly in space across all feature maps for hundreds of layers - this incurs a high computational cost for real-time ...
We show that such computation masks can be used to reduce computation in the high-resolution main network. ...
However, unlike human attention which helps us reason visual scenes faster, these attentional network structures do not speed up the inference process since the attention weights are dense across the receptive ...
arXiv:1801.02108v2
fatcat:ubfuy7p4b5gkfkwbskros7difu
An Empirical Study of Spatial Attention Mechanisms in Deep Networks
[article]
2019
arXiv
pre-print
A proper combination of deformable convolution with key content only saliency achieves the best accuracy-efficiency tradeoff in self-attention. ...
dominant Transformer attention as well as the prevalent deformable convolution and dynamic convolution modules. ...
Faster R-CNN [36] with Feature Pyramid Networks (FPN) [27] is chosen as the baseline system. ImageNet [13] pre-trained ResNet-50 is utilized as the backbone. ...
arXiv:1904.05873v1
fatcat:uh5rlstdy5hwzkauuytl6wkv6e
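The "key content only" saliency term from the attention-factor decomposition the study examines can be isolated: the weight on each position depends only on the key content there, scored against a learned global query vector (a hedged single-head reading of that term):

    import torch
    import torch.nn as nn

    class KeyContentSaliency(nn.Module):
        """w_j = softmax_j(u . k_j): saliency from key content alone."""
        def __init__(self, dim):
            super().__init__()
            self.key = nn.Linear(dim, dim)
            self.u = nn.Parameter(torch.zeros(dim))  # learned global query

        def forward(self, x):                        # x: (B, N, dim)
            scores = self.key(x) @ self.u            # (B, N)
            w = scores.softmax(dim=-1).unsqueeze(-1)
            return (w * x).sum(1)                    # (B, dim) pooled output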
Curvature-Driven Deformable Convolutional Networks for End-To-End Object Detection
2022
Mobile Information Systems
In this work, we present curvature-driven deformable convolutional networks (C-DCNets) that adopt explicit geometric property of the preceding feature maps to enhance the deformability of convolution operation ...
Nevertheless, the spatial support of these networks may be inexact because the offsets are learned implicitly via an extra convolutional layer. ...
"Selective Kernel Networks" (SKNets) [4] focus on the adaptive receptive field (RF) size of neurons by introducing the attention mechanisms. ...
doi:10.1155/2022/7556022
fatcat:gskjudfflvadvoahzpewohawpe
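Baseline deformable convolution, with offsets predicted by an extra conv layer as the entry notes; C-DCNets would replace this implicit offset branch with curvature-derived offsets (sketch uses torchvision's operator):

    import torch
    import torch.nn as nn
    from torchvision.ops import deform_conv2d

    class DeformConvBlock(nn.Module):
        def __init__(self, in_ch, out_ch, k=3):
            super().__init__()
            # 2 offsets (dy, dx) per kernel tap, learned implicitly by a conv.
            self.offset = nn.Conv2d(in_ch, 2 * k * k, k, padding=k // 2)
            self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.1)
            self.k = k

        def forward(self, x):
            return deform_conv2d(x, self.offset(x), self.weight,
                                 padding=self.k // 2)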
Kernel Transformer Networks for Compact Spherical Convolution
[article]
2019
arXiv
pre-print
Ideally, 360° imagery could inherit the deep convolutional neural networks (CNNs) already trained with great success on perspective projection images. ...
In this work, we present the Kernel Transformer Network (KTN). KTNs efficiently transfer convolution kernels from perspective images to the equirectangular projection of 360° images. ...
We also apply L2 regularization with weight 5×10⁻⁴. ...
arXiv:1812.03115v2
fatcat:iyjgfjhfize2xmhrcwidhgh55i
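The L2 regularization with weight 5×10⁻⁴ quoted above is conventionally realized as optimizer weight decay (the model and learning rate here are placeholders):

    import torch
    import torch.nn as nn

    model = nn.Conv2d(3, 16, 3)  # stand-in module
    # L2 regularization with weight 5e-4, expressed as weight decay.
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                                momentum=0.9, weight_decay=5e-4)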
YOLO-Rip: A modified lightweight network for Rip currents detection
2022
Frontiers in Marine Science
Finally, the SimAM module, which is a parameter-free attention mechanism, was added to optimize the target detection accuracy. ...
Subsequently, we proposed adding a joint dilated convolutional (JDC) module to the lateral connection of the feature pyramid network (FPN) to expand the receptive field and improve feature information utilization ...
and k the original convolutional kernel size. ...
doi:10.3389/fmars.2022.930478
fatcat:v52wx3a5bnazhoifap2fyo5sem
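The truncated fragment "... and k the original convolutional kernel size" presumably belongs to the standard effective kernel size of a dilated convolution, k_eff = k + (k - 1)(d - 1) for dilation rate d; a quick check:

    import torch.nn as nn

    def effective_kernel(k, d):
        """Effective kernel size of a dilated conv: k + (k - 1) * (d - 1)."""
        return k + (k - 1) * (d - 1)

    assert effective_kernel(3, 2) == 5  # a 3x3 kernel at dilation 2 spans 5x5
    conv = nn.Conv2d(64, 64, kernel_size=3, dilation=2, padding=2)  # same H, W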