Transfer Learning Improves Supervised Image Segmentation Across Imaging Protocols
2015
IEEE Transactions on Medical Imaging
Transfer learning improves supervised image segmentation across imaging protocols van Opbroek, Annegreet; Ikram, M. ...
Transfer learning improves supervised image segmentation across imaging protocols. IEEE transactions on medical imaging, 34(5), 1018-1030. ...
The purpose of our study was to investigate whether transfer-learning techniques can improve upon regular supervised segmentation of images obtained with different scan protocols. ...
doi:10.1109/tmi.2014.2366792
pmid:25376036
fatcat:nhngb27plrd3tbgte337p6c2ay
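The study above evaluates transfer classifiers for voxelwise segmentation when training and test scans come from different protocols. Below is a minimal sketch of the instance-weighting idea behind such transfer classifiers, assuming synthetic voxel features and an illustrative source weight of 0.1; neither the features nor the weight come from the paper.

```python
# Instance-weighted transfer: abundant source-protocol samples are
# down-weighted so the few labeled target-protocol samples dominate.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_src = rng.normal(0.0, 1.0, size=(500, 10))   # source-protocol voxel features
y_src = (X_src[:, 0] > 0).astype(int)          # stand-in tissue labels
X_tgt = rng.normal(0.3, 1.2, size=(50, 10))    # shifted target-protocol features
y_tgt = (X_tgt[:, 0] > 0.3).astype(int)

X = np.vstack([X_src, X_tgt])
y = np.concatenate([y_src, y_tgt])
w = np.concatenate([np.full(len(X_src), 0.1),  # hypothetical source weight
                    np.full(len(X_tgt), 1.0)])

clf = SVC(kernel="rbf").fit(X, y, sample_weight=w)
print("target accuracy:", clf.score(X_tgt, y_tgt))
```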
Using Rule-Based Labels for Weak Supervised Learning: A ChemNet for Transferable Chemical Property Prediction
[article]
2018
arXiv
pre-print
DNN models that were trained using conventional supervised learning. ...
In this work, we develop an approach of using rule-based knowledge for training ChemNet, a transferable and generalizable deep neural network for chemical property prediction that learns in a weak-supervised ...
Nathan Baker for helpful discussions. This work is supported by the following PNNL LDRD programs: Pauling Postdoctoral Fellowship and Deep Learning for Scientific Discovery Agile Investment. ...
arXiv:1712.02734v2
fatcat:itrjobfzkzexnlw5nqwxjqmzk4
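The recipe ChemNet's abstract describes, pretraining on labels produced by rules rather than by experts, can be sketched in a few lines. Everything below (the "rule", the feature sizes, the architecture) is an illustrative assumption, not ChemNet itself.

```python
# Weak supervision with rule-based labels: a cheap heuristic labels unlabeled
# inputs, a network pretrains on those labels, and the backbone transfers.
import torch
import torch.nn as nn

def rule_label(x):
    # Stand-in for a rule-derived property (e.g., a computed descriptor).
    return x.sum(dim=1, keepdim=True)

x_unlabeled = torch.randn(1024, 16)          # unlabeled molecular features
y_weak = rule_label(x_unlabeled)             # rule-based (weak) targets

backbone = nn.Sequential(nn.Linear(16, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU())
head = nn.Linear(64, 1)
opt = torch.optim.Adam([*backbone.parameters(), *head.parameters()], lr=1e-3)

for _ in range(200):                         # weak-supervised pretraining
    opt.zero_grad()
    nn.functional.mse_loss(head(backbone(x_unlabeled)), y_weak).backward()
    opt.step()

head = nn.Linear(64, 1)                      # fresh head, fine-tuned on the
                                             # scarce real property labels
```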
Efficient Visual Pretraining with Contrastive Detection
[article]
2021
arXiv
pre-print
Finally, our objective seamlessly handles pretraining on more complex images such as those in COCO, closing the gap with supervised transfer learning from COCO to PASCAL. ...
Self-supervised pretraining has been shown to yield powerful representations for transfer learning. ...
Results: larger models In Table 2 we compare to prior works on self-supervised learning which transfer to COCO. ...
arXiv:2103.10957v2
fatcat:xdpkl5tr6ff3xf2tm5bj42sd6u
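The core of contrastive detection is pooling features inside object regions and contrasting matching objects across augmented views. Here is a self-contained sketch with random stand-in feature maps and masks; the shapes, masks, and temperature are assumptions, not the paper's configuration.

```python
# Object-level contrastive objective: mask-pool per-object embeddings from two
# views, then pull matching objects together with an InfoNCE-style loss.
import torch
import torch.nn.functional as F

def mask_pool(feats, masks):
    # feats: (C, H, W); masks: (K, H, W) binary -> (K, C) pooled embeddings
    m = masks.float()
    return (m.unsqueeze(1) * feats.unsqueeze(0)).sum((-2, -1)) / (
        m.sum((-2, -1)).unsqueeze(1) + 1e-6)

feats_a = torch.randn(128, 28, 28)           # feature map of view 1
feats_b = torch.randn(128, 28, 28)           # feature map of view 2
masks = torch.rand(5, 28, 28) > 0.5          # 5 stand-in object masks

za = F.normalize(mask_pool(feats_a, masks), dim=1)
zb = F.normalize(mask_pool(feats_b, masks), dim=1)
logits = za @ zb.t() / 0.1                   # temperature-scaled similarities
loss = F.cross_entropy(logits, torch.arange(5))  # object k matches object k
```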
CYBORGS: Contrastively Bootstrapping Object Representations by Grounding in Segmentation
[article]
2022
arXiv
pre-print
Experiments show our representations transfer robustly to downstream tasks in classification, detection and segmentation. ...
Many recent approaches in contrastive learning have worked to close the gap between pretraining on iconic images like ImageNet and pretraining on complex scenes like COCO. ...
Main Results: Representation Learning We follow standard downstream transfer-based protocols to evaluate the strength of representations learned by CYBORGS. ...
arXiv:2203.09343v1
fatcat:w55qgtxkcfb5hdnzlljmfbuydy
Learning Transferrable Knowledge for Semantic Segmentation with Deep Convolutional Neural Network
[article]
2015
arXiv
pre-print
Contrary to existing weakly-supervised approaches, our algorithm exploits auxiliary segmentation annotations available for different categories to guide segmentations on images with only image-level class ...
To make the segmentation knowledge transferrable across categories, we design a decoupled encoder-decoder architecture with attention model. ...
is appropriate to transfer the segmentation knowledge across categories. • The proposed algorithm achieved substantial performance improvement over existing weakly-supervised approaches with segmentation ...
arXiv:1512.07928v1
fatcat:p6kdgj7gbvdrrgxzrdhss275k4
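A compact sketch of the decoupling this abstract describes: class-specific attention selects where a category appears, and a category-agnostic decoder, the part whose segmentation knowledge transfers, produces the mask. The layer sizes below are illustrative assumptions, not the paper's architecture.

```python
# Decoupled design: the attention model localizes a given class; the decoder
# is category-agnostic, so its segmentation knowledge can transfer.
import torch
import torch.nn as nn

class DecoupledSegNet(nn.Module):
    def __init__(self, n_classes=20):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.attention = nn.Conv2d(16, n_classes, 1)   # class-specific attention
        self.decoder = nn.Sequential(                  # shared, class-agnostic
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 1, 1))

    def forward(self, x, cls):
        f = self.encoder(x)
        attn = torch.sigmoid(self.attention(f)[:, cls:cls + 1])  # (B,1,H,W)
        return self.decoder(f * attn)                  # foreground mask logits

net = DecoupledSegNet()
mask_logits = net(torch.randn(2, 3, 64, 64), cls=7)   # mask for class 7
```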
Advancing Medical Imaging Informatics by Deep Learning-Based Domain Adaptation
2020
IMIA Yearbook of Medical Informatics
DA is a type of transfer learning (TL) that can improve the performance of models when applied to multiple different datasets. ...
, image modality, and learning scenarios. ...
[77] have applied a domain discriminator to MR images from different scanners and imaging protocols to improve the brain lesion segmentation performance. ...
doi:10.1055/s-0040-1702009
pmid:32823306
fatcat:gtlhoh6m3fh4hcumfzdlpdohr4
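The domain-discriminator approach the survey cites ([77]) follows the standard adversarial recipe: a discriminator predicts which scanner or protocol an image came from, and a gradient reversal layer pushes the encoder toward domain-invariant features. A minimal sketch, with illustrative module sizes:

```python
# Adversarial domain adaptation: the discriminator learns to tell scanners
# apart while the reversed gradient makes the encoder domain-invariant.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -grad                         # flip gradients into the encoder

encoder = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten())
domain_head = nn.Linear(8, 2)                # which scanner/protocol?
# (The segmentation head, trained on labeled source scans, is omitted here.)

x = torch.randn(4, 1, 64, 64)                # a mixed batch of MR slices
domain = torch.tensor([0, 0, 1, 1])          # scanner id of each slice
d_loss = nn.functional.cross_entropy(
    domain_head(GradReverse.apply(encoder(x))), domain)
d_loss.backward()
```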
Learning Transferrable Knowledge for Semantic Segmentation with Deep Convolutional Neural Network
2016
2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Contrary to existing weakly-supervised approaches, our algorithm exploits auxiliary segmentation annotations available for different categories to guide segmentations on images with only image-level class ...
To make segmentation knowledge transferrable across categories, we design a decoupled encoder-decoder architecture with attention model. ...
is appropriate to transfer the segmentation knowledge across categories. • The proposed algorithm achieves substantial performance improvement over existing weakly-supervised approaches by exploiting ...
doi:10.1109/cvpr.2016.349
dblp:conf/cvpr/HongOLH16
fatcat:obemojsyvvfflodeaxd3ptfyse
Unsupervised Semantic Segmentation by Contrasting Object Mask Proposals
[article]
2021
arXiv
pre-print
Being able to learn dense semantic representations of images without supervision is an important problem in computer vision. ...
Second, our representations can improve over strong baselines when transferred to new datasets, e.g. COCO and DAVIS. The code is available. ...
Interestingly, our representations transfer well across various datasets. ...
arXiv:2102.06191v3
fatcat:6kbodv6wbjblzorfimrs4hz5v4
Learning from 2D: Contrastive Pixel-to-Point Knowledge Transfer for 3D Pretraining
[article]
2021
arXiv
pre-print
In this paper, we present a novel 3D pretraining method by leveraging 2D networks learned from rich 2D datasets. ...
Our intensive experiments show that the 3D models pretrained with 2D knowledge boost the performances of 3D networks across various real-world 3D downstream tasks. ...
Following the pretrain-finetune protocol in [64] , we show that PPKT consistently boosts overall downstream performance across multiple real-world 3D tasks and datasets. ...
arXiv:2104.04687v3
fatcat:zfmpvxsv6vevlfcualoorajmkm
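Pixel-to-point transfer aligns each 3D point's feature with the 2D pixel it projects onto. The sketch below assumes known pixel correspondences and uses random stand-in features; the temperature and shapes are assumptions, not the paper's settings.

```python
# Pixel-to-point transfer: each 3D point feature is pulled toward the 2D
# pixel feature it projects onto; only the 3D side receives gradients.
import torch
import torch.nn.functional as F

pix_feats = torch.randn(128, 32, 32)                 # frozen 2D features (C,H,W)
pt_feats = torch.randn(100, 128, requires_grad=True) # learnable 3D point features
v = torch.randint(0, 32, (100,))                     # projected row of each point
u = torch.randint(0, 32, (100,))                     # projected column

targets = F.normalize(pix_feats[:, v, u].t(), dim=1) # (100, C) matched pixels
points = F.normalize(pt_feats, dim=1)
logits = points @ targets.t() / 0.07                 # point i matches pixel i
loss = F.cross_entropy(logits, torch.arange(100))
loss.backward()                                      # trains only the 3D features
```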
Object discovery and representation networks
[article]
2022
arXiv
pre-print
Instead, we propose a self-supervised learning paradigm that discovers this image structure by itself. ...
The resulting learning paradigm is simpler, less brittle, and more general, and achieves state-of-the-art transfer learning results for object detection and instance segmentation on COCO, and semantic ...
Contrastive learning departed from this tradition by radically simplifying the self-supervised protocol, in that the pretext task is specified by the data itself: representations must learn to distinguish ...
arXiv:2203.08777v2
fatcat:vkem4cnhpnhf3bqebzusk4o45m
Self-Supervised Learning from Unlabeled Fundus Photographs Improves Segmentation of the Retina
[article]
2021
arXiv
pre-print
To overcome these limitations, we utilized contrastive self-supervised learning to exploit the large variety of unlabeled fundus images in the publicly available EyePACS dataset. ...
Automated segmentation of fundus photographs would improve the quality, capacity, and cost-effectiveness of eye care screening programs. ...
We repeated the experiments several times (N=4 for image segmentation, N=12 for domain transfer) and paired the results from matching training/validation splits. ...
arXiv:2108.02798v1
fatcat:fcmedcintnechhbxybt44lhfhy
What makes instance discrimination good for transfer learning?
[article]
2021
arXiv
pre-print
It comes as a surprise that image annotations would be better left unused for transfer learning. ...
In this work, we investigate the following problems: What makes instance discrimination pretraining good for transfer learning? What knowledge is actually learned and transferred from these models? ...
This is in contrast to traditional supervised learning, where ImageNet performance is improved and its transfer performance is compromised. ...
arXiv:2006.06606v2
fatcat:gyleg63lbzfqpbkb2b3aryz63u
Domain Adaptive Relational Reasoning for 3D Multi-Organ Segmentation
[article]
2020
arXiv
pre-print
as a latent variable to transfer the knowledge shared across multiple domains. ...
To guarantee the transferability of the learned spatial relationship to multiple domains, we additionally introduce two schemes: 1) Employing a super-resolution network also jointly trained with the segmentation ...
Such relational configurations are deemed weak cues for the segmentation task, which are easier to learn and thus transfer better [28]. ...
arXiv:2005.09120v2
fatcat:uaoaehddgffhlh3btpresyrb5y
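One of the two schemes this abstract names, a super-resolution network trained jointly with segmentation, amounts to multi-task training on a shared encoder so its features must preserve fine spatial detail. A minimal sketch under illustrative sizes, not the paper's architecture:

```python
# Joint training: segmentation and super-resolution heads share one encoder,
# so the shared features are forced to retain fine spatial detail.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
seg_head = nn.Conv2d(16, 3, 1)                       # 3 stand-in organ classes
sr_head = nn.Sequential(nn.Upsample(scale_factor=2),
                        nn.Conv2d(16, 1, 3, padding=1))

lowres = torch.randn(2, 1, 64, 64)                   # input slices
seg_gt = torch.randint(0, 3, (2, 64, 64))            # organ labels
hires_gt = torch.randn(2, 1, 128, 128)               # high-resolution targets

f = encoder(lowres)
loss = (nn.functional.cross_entropy(seg_head(f), seg_gt)
        + nn.functional.l1_loss(sr_head(f), hires_gt))
loss.backward()
```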
Exploring Set Similarity for Dense Self-supervised Representation Learning
[article]
2022
arXiv
pre-print
Meanwhile, these attentional features can keep the coherence of the same image across different views to alleviate semantic inconsistency. ...
We generalize pixel-wise similarity learning to set-wise one to improve the robustness because sets contain more semantic and structure information. ...
Conclusion In this paper, we propose a simple but effective dense self-supervised representation learning framework, SetSim, by exploring set similarity across views to improve the robustness. ...
arXiv:2107.08712v2
fatcat:jqtpzpseird2hbldlpjj4x7txa
VirTex: Learning Visual Representations from Textual Annotations
[article]
2021
arXiv
pre-print
We train convolutional networks from scratch on COCO Captions, and transfer them to downstream recognition tasks including image classification, object detection, and instance segmentation. ...
On all tasks, VirTex yields features that match or exceed those learned on ImageNet -- supervised or unsupervised -- despite using up to ten times fewer images. ...
model zoo; Georgia Gkioxari for suggesting the Instance Segmentation pretraining task ablation; and Stefan Lee for suggestions on figure aesthetics. ...
arXiv:2006.06666v3
fatcat:ifck6jbayvc4hcrznk6icqghga
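VirTex's pretraining signal is caption prediction: gradients from a language-modeling loss train the visual backbone, which is then transferred to recognition tasks. In the sketch below a single-layer LSTM stands in for VirTex's actual textual head, and the vocabulary, backbone, and sizes are all illustrative assumptions.

```python
# Captioning as pretraining: predicting caption tokens conditioned on the
# image trains the visual backbone end to end.
import torch
import torch.nn as nn

vocab, d = 1000, 64
backbone = nn.Sequential(nn.Conv2d(3, d, 7, stride=4), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())  # image -> d
embed = nn.Embedding(vocab, d)
lstm = nn.LSTM(d, d, batch_first=True)
to_vocab = nn.Linear(d, vocab)

images = torch.randn(2, 3, 224, 224)
captions = torch.randint(0, vocab, (2, 12))      # token ids, teacher forcing
h0 = backbone(images).unsqueeze(0)               # image feature seeds the LM
out, _ = lstm(embed(captions[:, :-1]), (h0, torch.zeros_like(h0)))
loss = nn.functional.cross_entropy(
    to_vocab(out).reshape(-1, vocab), captions[:, 1:].reshape(-1))
loss.backward()                                  # gradients reach the backbone
```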
Showing results 1 — 15 out of 19,932 results