Dual-branch Attention-In-Attention Transformer for single-channel speech enhancement
[article]
Guochen Yu, Andong Li, Chengshi Zheng, Yinuo Guo, Yutian Wang, Hui Wang
2022
arXiv
pre-print
Specifically, the proposed attention-in-attention transformer consists of adaptive temporal-frequency attention transformer blocks and an adaptive hierarchical attention module, aiming to capture long-term ...
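The excerpt describes attention applied adaptively over both the temporal and the frequency axes of the spectrum. Below is a minimal PyTorch sketch of that idea: self-attention run separately along the time and frequency dimensions of a spectrogram-like tensor and then adaptively fused. The class name, the learnable fusion scalars, and the tensor layout are assumptions for illustration; the paper's actual ATFA block may differ.

```python
import torch
import torch.nn as nn

class TimeFreqAttentionSketch(nn.Module):
    """Sketch of attention along the time axis and the frequency axis (assumed design)."""
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.time_attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.freq_attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        # Learnable scalars weighting the two attention paths (an assumption).
        self.alpha = nn.Parameter(torch.tensor(0.5))
        self.beta = nn.Parameter(torch.tensor(0.5))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time, freq)
        b, c, t, f = x.shape

        # Temporal attention: fold frequency into the batch, attend across time frames.
        xt = x.permute(0, 3, 2, 1).reshape(b * f, t, c)
        xt, _ = self.time_attn(xt, xt, xt)
        xt = xt.reshape(b, f, t, c).permute(0, 3, 2, 1)

        # Frequency attention: fold time into the batch, attend across frequency bins.
        xf = x.permute(0, 2, 3, 1).reshape(b * t, f, c)
        xf, _ = self.freq_attn(xf, xf, xf)
        xf = xf.reshape(b, t, f, c).permute(0, 3, 1, 2)

        # Adaptive fusion of the two paths plus a residual connection.
        return x + self.alpha * xt + self.beta * xf
```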
Motivated by this, we propose a dual-branch attention-in-attention transformer, dubbed DB-AIAT, to handle both coarse- and fine-grained regions of the spectrum in parallel. ...
In each branch, an improved transformer [6] is employed, which consists of a multi-head self-attention (MHSA) module and a GRU-based feed-forward network, followed by residual connections and layer normalization (LN). ...

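The following is a minimal PyTorch sketch of such an improved transformer unit: MHSA followed by a GRU-based feed-forward network, each wrapped with a residual connection and LN. The hidden sizes, the bidirectional GRU, and the post-norm ordering are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ImprovedTransformerSketch(nn.Module):
    """Sketch of an MHSA block plus a GRU-based feed-forward network (assumed sizes)."""
    def __init__(self, d_model: int = 64, num_heads: int = 4, d_hidden: int = 128):
        super().__init__()
        self.mhsa = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        # GRU-based feed-forward network in place of the usual two linear layers.
        self.gru = nn.GRU(d_model, d_hidden, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * d_hidden, d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, sequence_length, d_model)
        attn_out, _ = self.mhsa(x, x, x)
        x = self.norm1(x + attn_out)        # residual connection + LN around MHSA

        ff_out, _ = self.gru(x)
        ff_out = self.proj(torch.relu(ff_out))
        x = self.norm2(x + ff_out)          # residual connection + LN around the GRU-based FFN
        return x
```

Such a unit can be stacked and applied along either the time or the frequency axis, as in the sketch after the earlier excerpt.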
arXiv:2110.06467v5