
HS-ResNet: Hierarchical-Split Block on Convolutional Neural Network [article]

Pengcheng Yuan, Shufei Lin, Cheng Cui, Yuning Du, Ruoyu Guo, Dongliang He, Errui Ding, Shumin Han
2020 arXiv   pre-print
This paper proposes a representational block named the Hierarchical-Split Block, which can be used as a plug-and-play block to upgrade existing convolutional neural networks and improves model performance significantly  ...  The Hierarchical-Split Block contains many hierarchical split-and-concatenate connections within a single residual block. We find that multi-scale features are of great importance for numerous vision tasks.  ...  Meanwhile, we propose a network named HS-ResNet, which consists of several Hierarchical-Split blocks.  ... 
arXiv:2010.07621v1 fatcat:ivc7t4jhpbbd5p3omofzoily2m
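The split-and-concatenate wiring the abstract describes can be sketched in plain NumPy. This is not the paper's implementation: the learned 3x3 convolution applied to each split is replaced by a stand-in scaling, so only the connectivity pattern (first split passes through; half of each later split's output joins the final concat, the other half feeds the next split) is shown.

```python
import numpy as np

def hs_block(x, splits=4):
    """Sketch of Hierarchical-Split connectivity within one residual block.

    x: feature map of shape (C, H, W), with C divisible by `splits`.
    `transform` is a hypothetical stand-in for a learned 3x3 convolution;
    only the split/concat wiring is faithful to the described block.
    """
    def transform(t):                 # stand-in, NOT a real convolution
        return 0.5 * t

    groups = np.split(x, splits, axis=0)
    outputs = [groups[0]]             # first split passes through unchanged
    carry = None
    for g in groups[1:]:
        inp = g if carry is None else np.concatenate([carry, g], axis=0)
        y = transform(inp)
        half = y.shape[0] // 2
        # half of this split's output goes to the final concat;
        # the other half is fed forward into the next split
        if half > 0:
            outputs.append(y[:half])
            carry = y[half:]
        else:
            outputs.append(y)
            carry = None
    if carry is not None:
        outputs.append(carry)         # last carried half joins the concat
    return np.concatenate(outputs, axis=0)
```

Because every channel eventually reaches the final concatenation, the output channel count matches the input, so the block can be dropped into a residual connection.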

CS-HSNet: A Cross-Siamese Change Detection Network Based On Hierarchical-Split Attention

Qingtian Ke, Peng Zhang
2021 IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing  
Finally, we propose a hierarchical-split block for generating multi-scale feature representations in a coarse-to-fine fashion.  ...  residual block.  ...  In particular, our hierarchical-split attention module could be plugged into the majority of convolutional neural networks.  ... 
doi:10.1109/jstars.2021.3113831 fatcat:vfo37a36efgvpjogctq2emldo4

Calibrated Convolution with Gaussian of Difference

Huoxiang Yang, Chao Li, Yongsheng Liang, Wei Liu, Fanyang Meng
2022 Applied Sciences  
Attention mechanisms are widely used with Convolutional Neural Networks (CNNs) when performing various visual tasks.  ...  During experimental tests on various datasets, the method increased ResNet50-based classification accuracy from 76.40% to 77.87% on the ImageNet dataset, and the tests generally confirmed that CCGD  ...  Res2Net [21] and HS-ResNet [22] build upon ResNet by introducing a fine-grained hierarchical split and concatenated connections within a single residual block.  ... 
doi:10.3390/app12136570 fatcat:cd4owkn325dw5hjw7v35otdzuq

The HCCL System for the NIST SRE21 [article]

Zhuo Li, Runqiu Xiao, Hangting Chen, Zhenduo Zhao, Zihan Zhang, Wenchao Wang
2022 arXiv   pre-print
We denote the methods that work directly on speech to eliminate the relatively explicit mismatches collectively as data adaptation methods.  ...  Furthermore, some popular back-end domain adaptation algorithms are deployed on speaker embeddings to alleviate speaker performance degradation caused by the implicit mismatch.  ...  We use our implementation of ECAPA-TDNN with the following parameters: the number of HS-ResNet blocks is set to 4, with dilation values 2, 3, 4, 5 for the blocks; the number of channels and embedding size are  ... 
arXiv:2207.04676v1 fatcat:zdh25e2fr5g5blhhvjuppstsdu
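The quoted configuration stacks dilated convolutions, and each dilated layer enlarges the receptive field by (kernel - 1) * dilation. A minimal calculator makes the arithmetic concrete; the dilation values 2, 3, 4, 5 mirror the configuration quoted above, and whether the convolutions are 1-D (TDNN) or 2-D does not change the arithmetic along one axis.

```python
def receptive_field(kernel=3, dilations=(2, 3, 4, 5)):
    """Receptive field of a stride-1 stack of dilated convolutions.

    Each layer with dilation d adds (kernel - 1) * d to the RF.
    The default dilations follow the ECAPA-TDNN setup quoted above.
    """
    rf = 1
    for d in dilations:
        rf += (kernel - 1) * d
    return rf
```

With kernel 3 and dilations 2, 3, 4, 5 this gives 1 + 2 * (2 + 3 + 4 + 5) = 29 frames, versus 9 frames for four undilated 3x3 layers.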

SRNet: Scale-aware Representation Learning Network for Dense Crowd Counting

Liangjun Huang, Luning Zhu, Shihui Shen, Qing Zhang, Jianwei Zhang
2021 IEEE Access  
To decrease the amount of calculation required while maintaining the improved accuracy, HS-ResNet [19] was recently proposed; in this model, the Hierarchical-Split Block (HSB) separation feature is divided  ...  [37] combined a multi-scale convolutional neural network (generator) and an adversarial network (discriminator) to generate a high-quality density map and adapt crowd counting to complex crowd scenes  ... 
doi:10.1109/access.2021.3115963 fatcat:tgb2oyh6rvdd7mzmodrqwrjnoe

Explore Long-Range Context feature for Speaker Verification [article]

Zhuo Li
In this paper, we propose the combination of the Hierarchical-Split block (HS-block) and the Depthwise Separable Self-Attention (DSSA) module to capture richer multi-range context speaker features from a  ...  Specifically, the HS-block splits the feature map and filters into several groups and stacks them in one block, which enlarges the receptive fields (RFs) locally.  ...  • We introduce a novel Hierarchical-Split block to enlarge the RFs in a single block for SV tasks, named HS-ResNet. • We propose an innovative plug-and-play module based on the attention mechanism, DSSA  ... 
doi:10.48550/arxiv.2112.07134 fatcat:dehr6kj2offvljezywekbthj6m
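The "depthwise separable" idea in the DSSA snippet is that attention is applied per channel, so long-range context is aggregated over time frames without mixing channels. The paper's exact projections are not reproduced here; this hypothetical sketch uses identity query/key projections purely to illustrate the per-channel attention pattern.

```python
import numpy as np

def depthwise_self_attention(x):
    """Toy per-channel (depthwise) self-attention over time frames.

    x: feature map of shape (C, T). Each channel attends only over its
    own T frames; projections are identity stand-ins, NOT the DSSA
    module's learned parameters.
    """
    C, T = x.shape
    out = np.empty_like(x, dtype=float)
    for c in range(C):
        q = x[c]                                   # stand-in projection
        scores = np.outer(q, q) / np.sqrt(T)       # frame-to-frame affinity
        scores -= scores.max(axis=1, keepdims=True)  # numerically stable softmax
        w = np.exp(scores)
        w /= w.sum(axis=1, keepdims=True)
        out[c] = w @ x[c]                          # context-weighted frames
    return out
```

Because no channel mixing happens inside the attention, such a module can be inserted after a convolutional block without changing its channel count, which is what makes it plug-and-play.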