A Two-Stream Mutual Attention Network for Semi-supervised Biomedical Segmentation with Noisy Labels
[article]
2018
arXiv
pre-print
In this paper, we propose a Two-Stream Mutual Attention Network (TSMAN) that weakens the influence of back-propagated gradients caused by incorrect labels, thereby rendering the network robust to unclean ...
Learning-based methods suffer from a deficiency of clean annotations, especially in biomedical segmentation. ...
In this paper, we design a network that is less disturbed by noisy labels ...
Figure 1: The pipeline of our self-training framework (data with noisy labels, Two-Stream Mutual Attention Network, hierarchical distillation). ...
arXiv:1807.11719v3
fatcat:2qvalspasrfevhtbbtue2v52ju
A Two-Stream Mutual Attention Network for Semi-Supervised Biomedical Segmentation with Noisy Labels
2019
Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19)
In this paper, we propose a Two-Stream Mutual Attention Network (TSMAN) that weakens the influence of back-propagated gradients caused by incorrect labels, thereby rendering the network robust to unclean ...
By exchanging multi-level features within two-stream architecture, the effects of noisy labels in each sub-network are reduced by decreasing the noisy gradients. ...
In this paper, we design a network that is less disturbed by noisy labels and propose a ...
Figure 1: The pipeline of our self-training framework (data with noisy labels, Two-Stream Mutual Attention Network, hierarchical distillation). ...
doi:10.1609/aaai.v33i01.33014578
fatcat:2izgzjmyivdrlfgwuiuebbtw4y
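The two TSMAN entries above describe exchanging multi-level features between two sub-networks so that learned attention maps weaken the gradients produced by incorrectly labeled pixels. The snippet below is only a minimal, hedged sketch of that general idea in PyTorch; the module name `MutualAttentionBlock`, the 1x1-convolution attention heads, and the additive fusion are illustrative assumptions, not the authors' TSMAN implementation.

```python
# Illustrative sketch only: one feature-exchange step between two streams.
# Names, shapes, and the fusion rule are assumptions, not the TSMAN code.
import torch
import torch.nn as nn

class MutualAttentionBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Each stream produces a per-pixel attention map from its own features.
        self.att_a = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid())
        self.att_b = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor):
        a_gate = self.att_a(feat_a)        # (N, 1, H, W)
        b_gate = self.att_b(feat_b)        # (N, 1, H, W)
        # Each stream mixes in the peer's features scaled by the peer's attention,
        # so regions the peer scores as unreliable contribute smaller gradients.
        out_a = feat_a + b_gate * feat_b
        out_b = feat_b + a_gate * feat_a
        return out_a, out_b

# Usage: exchange features at one level of two segmentation sub-networks.
block = MutualAttentionBlock(channels=64)
fa, fb = torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32)
ra, rb = block(fa, fb)
```

In a complete two-stream network, blocks like this could be cascaded at several feature levels, which is where the hierarchical distillation named in Figure 1 would operate.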
Uncertainty-Guided Mutual Consistency Learning for Semi-Supervised Medical Image Segmentation
[article]
2021
arXiv
pre-print
Experimental results demonstrate that our method achieves performance gains by leveraging unlabeled data and outperforms existing semi-supervised segmentation methods. ...
Semi-supervised learning has been widely applied to medical image segmentation tasks since it alleviates the heavy burden of acquiring expert-examined annotations and takes the advantage of unlabeled data ...
Zhang, “A two-stream mutual attention network for semi-supervised biomedical segmentation ...
V. ...
arXiv:2112.02508v1
fatcat:ofgv42dygvhyxphgh2wbcgdvoy
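The entry above summarizes results only, but its title names the mechanism: uncertainty-guided mutual consistency on unlabeled images. Purely as a loosely related illustration, and not the paper's actual formulation, the sketch below masks a consistency penalty between two predictions wherever the predictive entropy is high; the entropy threshold and the squared-difference penalty are assumptions.

```python
# Hedged, generic sketch of an uncertainty-masked consistency loss for
# unlabeled images; the threshold and penalty are illustrative assumptions.
import torch
import torch.nn.functional as F

def uncertainty_masked_consistency(logits_1, logits_2, threshold=0.75):
    """Penalize disagreement between two predictions only where the first
    prediction is confident (normalized entropy below the threshold)."""
    p1 = F.softmax(logits_1, dim=1)                                   # (N, C, H, W)
    p2 = F.softmax(logits_2, dim=1)
    entropy = -(p1 * torch.log(p1 + 1e-8)).sum(dim=1, keepdim=True)   # (N, 1, H, W)
    max_entropy = torch.log(torch.tensor(float(logits_1.shape[1])))
    certain = (entropy / max_entropy < threshold).float()             # confidence mask
    sq_diff = (p1 - p2).pow(2).sum(dim=1, keepdim=True)
    return (certain * sq_diff).sum() / certain.sum().clamp(min=1.0)

# Example with random logits standing in for two model outputs.
l1, l2 = torch.randn(2, 4, 64, 64), torch.randn(2, 4, 64, 64)
loss = uncertainty_masked_consistency(l1, l2)
```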
Medical Image Segmentation with Limited Supervision: A Review of Deep Network Models
[article]
2021
arXiv
pre-print
The labeling costs for medical images are very high, especially in medical image segmentation, which typically requires intensive pixel/voxel-wise labeling. ...
However, due to its intrinsic difficulty, segmentation with limited supervision is challenging and specific model design and/or learning strategies are needed. ...
[303] introduced a two-stream mutual attention network with hierarchical distillation, where the multiple attention layers were used to discover incorrect labels and indicate potentially incorrect gradients ...
arXiv:2103.00429v1
fatcat:p44a5e34sre4nasea5kjvva55e
Medical Image Segmentation with Limited Supervision: A Review of Deep Network Models
2021
IEEE Access
INDEX TERMS Medical image segmentation, semi-supervised segmentation, partially-supervised segmentation, noisy label, sparse annotation. ...
The labeling costs for medical images are very high, especially in medical image segmentation, which typically requires intensive pixel/voxel-wise labeling. ...
[303] introduced a two-stream mutual attention network with hierarchical distillation, where the multiple attention layers were used to discover incorrect labels and indicate potentially incorrect gradients ...
doi:10.1109/access.2021.3062380
fatcat:r5vsec2yfzcy5nk7wusiftyayu
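Reference [303] in the review above is described as using attention layers to discover incorrect labels and flag the gradients they would otherwise back-propagate. The sketch below illustrates the general principle with a simpler stand-in, a confidence-weighted cross-entropy in which pixels whose given label the model finds implausible are down-weighted; the weighting rule is an assumption for illustration and is not the method of [303].

```python
# Hedged illustration: per-pixel loss weights suppress gradients from pixels
# whose provided label looks inconsistent with the current prediction.
import torch
import torch.nn.functional as F

def noise_aware_ce(logits, noisy_labels):
    """Cross-entropy in which pixels the model scores as unlikely to carry the
    given label contribute proportionally smaller gradients."""
    with torch.no_grad():
        probs = F.softmax(logits, dim=1)                                      # (N, C, H, W)
        # Model's confidence that the provided label is correct at each pixel.
        label_prob = probs.gather(1, noisy_labels.unsqueeze(1)).squeeze(1)    # (N, H, W)
        weight = label_prob / label_prob.mean().clamp(min=1e-8)
    per_pixel = F.cross_entropy(logits, noisy_labels, reduction="none")       # (N, H, W)
    return (weight * per_pixel).mean()

logits = torch.randn(2, 3, 32, 32, requires_grad=True)
labels = torch.randint(0, 3, (2, 32, 32))
loss = noise_aware_ce(logits, labels)
loss.backward()
```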
2020 Index IEEE Transactions on Image Processing Vol. 29
2020
IEEE Transactions on Image Processing
Nie, Y., +, TIP 2020 1465-1478 Compositional Attention Networks With Two-Stream Fusion for Video Question Answering. ...
Nazir, A., +, TIP 2020 7192-7202 One-Pass Multi-Task Networks With Cross-Task Guided Attention for Brain Tumor Segmentation. ...
doi:10.1109/tip.2020.3046056
fatcat:24m6k2elprf2nfmucbjzhvzk3m
2020 Index IEEE/ACM Transactions on Audio, Speech, and Language Processing Vol. 28
2020
IEEE/ACM Transactions on Audio Speech and Language Processing
Azad, A., +, TASLP 2020 592-604
Semi-Supervised Neural Chord Estimation Based on a Variational Autoencoder With Latent Chord Labels and Features. ...
., +, TASLP 2020 813-824 Semi-Supervised Neural Chord Estimation Based on a Variational Autoencoder With Latent Chord Labels and Features. ...
T: Target tracking. Multi-Hypothesis Square-Root Cubature Kalman Particle Filter for Speaker Tracking in Noisy and Reverberant Environments. Zhang, Q., +, TASLP 2020 1183-1197 ...
doi:10.1109/taslp.2021.3055391
fatcat:7vmstynfqvaprgz6qy3ekinkt4
Gesture Recognition in Robotic Surgery: a Review
2021
IEEE Transactions on Biomedical Engineering
While new strategies for discriminative feature extraction and knowledge transfer, or unsupervised and semi-supervised approaches, can mitigate the need for data and labels, they have not yet been demonstrated ...
, trajectory, segmentation, recognition, parsing. ...
To mitigate data requirements, the recognition problem can be tackled in a semi-supervised or unsupervised manner, where labels are only necessary for model testing. ...
doi:10.1109/tbme.2021.3054828
pmid:33497324
fatcat:si5dcvrvnzc55dse6cst2k5tfi
Graph Neural Networks: Methods, Applications, and Opportunities
[article]
2021
arXiv
pre-print
This article provides a comprehensive survey of graph neural networks (GNNs) in each learning setting: supervised, unsupervised, semi-supervised, and self-supervised learning. ...
Traditionally, handcrafted features for graphs are incapable of providing the necessary inference for various tasks from this complex data representation. ...
Graph-Based Semi-Supervised Learning: Semi-supervised learning has been around for many years. ...
arXiv:2108.10733v2
fatcat:j3rfmkiwenebvmfyboasjmx4nu
Deep Learning of Unified Region, Edge, and Contour Models for Automated Image Segmentation
[article]
2020
arXiv
pre-print
that unifies CNNs and active contour models with learnable parameters for fast and robust object delineation, (3) a novel approach for disentangling edge and texture processing in segmentation networks, and (4) a novel few-shot learning model in both supervised settings and semi-supervised settings where synergies between latent and image spaces are leveraged to learn to segment images given limited ...
A.3.1.2 Atlas-Based Segmentation Another variation of reliant segmentation is registration using mutual information with a previously segmented atlas. ...
arXiv:2006.12706v1
fatcat:6jchhrv6zrhlhbpcak6fcbh4a4
2021 Index IEEE Transactions on Image Processing Vol. 30
2021
IEEE Transactions on Image Processing
The Author Index contains the primary entry for each item, listed under the first author's name. ...
., +, TIP 2021 572-587 A Supervised Segmentation Network for Hyperspectral Image Classification. ...
., +, TIP 2021 6335-6348 A Supervised Segmentation Network for Hyperspectral Image Classification. ...
doi:10.1109/tip.2022.3142569
fatcat:z26yhwuecbgrnb2czhwjlf73qu
Survey on Deep Multi-modal Data Analytics: Collaboration, Rivalry and Fusion
[article]
2020
arXiv
pre-print
With the development of web technology, multi-modal or multi-view data has surged as a major stream for big data, where each modal/view encodes individual property of data objects. ...
Recently, deep neural networks have emerged as a powerful architecture for capturing the nonlinear distribution of high-dimensional multimedia data, and this naturally extends to multi-modal data. ...
Then, the network is learned in a self-supervised manner with the labels inherited from visible images. ...
arXiv:2006.08159v1
fatcat:g4467zmutndglmy35n3eyfwxku
Unsupervised Domain Adaptation for Semantic Image Segmentation: a Comprehensive Survey
[article]
2021
arXiv
pre-print
Semantic segmentation plays a fundamental role in a broad variety of computer vision applications, providing key information for the global understanding of an image. ...
We present the most important semantic segmentation methods; we provide a comprehensive survey on domain adaptation techniques for semantic segmentation; we unveil newer trends such as multi-domain learning ...
image-level label distribution to guide the pixel-level target segmentation. The two networks share the same architecture with an embedded attention module. ...
arXiv:2112.03241v1
fatcat:uzlehddvuvfwzf4dfbjimja45e
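The survey snippet above mentions using an image-level label distribution to guide pixel-level target segmentation. One common way to realize such guidance, shown here purely as a hedged sketch rather than the surveyed method's loss, is to aggregate the pixel-wise softmax into a per-image class distribution and pull it toward the image-level distribution with a KL term; the function name and the KL formulation are assumptions.

```python
# Hedged sketch: align the class histogram implied by pixel predictions with a
# known or estimated image-level class distribution on target images.
import torch
import torch.nn.functional as F

def image_level_guidance(logits, image_label_dist):
    """logits: (N, C, H, W); image_label_dist: (N, C) rows summing to 1."""
    pixel_probs = F.softmax(logits, dim=1)
    pred_dist = pixel_probs.mean(dim=(2, 3))                  # aggregate to (N, C)
    return F.kl_div(pred_dist.clamp(min=1e-8).log(), image_label_dist,
                    reduction="batchmean")

logits = torch.randn(2, 5, 16, 16)
target_dist = F.softmax(torch.randn(2, 5), dim=1)
loss = image_level_guidance(logits, target_dist)
```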
Learning Neural Textual Representations for Citation Recommendation
2021
2020 25th International Conference on Pattern Recognition (ICPR)
H.; Leung, Howard. 858. A Two-Stream Recurrent Network for Skeleton-Based Human Interaction Recognition. DAY 4, Jan 15, 2021.
Orozco-Alzate, Mauricio; Bicego, Manuele. 861. A Cheaper Rectified-Nearest-Feature-Line-Segment ...
with Scarce Labelled Data: Semi-Supervised Deep Learning with Mix Match for Covid-19 Detection Using Chest X-Ray Images. DAY 2, Jan 13, 2021. Nguyen, Phuc; Lathuilière, Stéphane; Ricci, Elisa. 1470 ...
doi:10.1109/icpr48806.2021.9412725
fatcat:3vge2tpd2zf7jcv5btcixnaikm
Revise-Net: Exploiting Reverse Attention Mechanism for Salient Object Detection
2021
Remote Sensing
Finally, multiple reverse attention modules at varying scales are cascaded between the two networks to guide the prediction module by employing the intermediate segmentation maps generated at each downsampling ...
The proposed Revise-Net model is divided into three parts: (a) the prediction module, (b) a residual enhancement module, and (c) reverse attention modules. ...
A Mutual Learning Method for Salient Object Detection With Intertwined Multi-Supervision. ...
doi:10.3390/rs13234941
fatcat:4jno22evrvehbm4zznwfi43yp4
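The Revise-Net abstract above describes cascading reverse attention modules that use intermediate segmentation maps to guide refinement. The sketch below shows a reverse-attention step as it is commonly formulated in salient object detection, weighting features by one minus the sigmoid of a coarse map; the layer sizes and the residual refinement are illustrative assumptions, not the Revise-Net configuration.

```python
# Minimal sketch of a reverse-attention refinement step; sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReverseAttention(nn.Module):
    def __init__(self, in_channels: int):
        super().__init__()
        self.refine = nn.Conv2d(in_channels, 1, kernel_size=3, padding=1)

    def forward(self, features: torch.Tensor, coarse_map: torch.Tensor):
        # Resize the coarse saliency map to the feature resolution.
        coarse = F.interpolate(coarse_map, size=features.shape[2:],
                               mode="bilinear", align_corners=False)
        reverse = 1.0 - torch.sigmoid(coarse)       # attend to what was missed
        residual = self.refine(features * reverse)  # predict a correction
        return coarse + residual                    # refined segmentation map

ra = ReverseAttention(in_channels=64)
feats = torch.randn(1, 64, 56, 56)
coarse = torch.randn(1, 1, 28, 28)
refined = ra(feats, coarse)
```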
Showing results 1 — 15 out of 361 results