Efficient Cross-Validation for Semi-Supervised Learning
[article]
2019
arXiv
pre-print
Manifold regularization, such as Laplacian regularized least squares (LapRLS) and the Laplacian support vector machine (LapSVM), has been widely used in semi-supervised learning, and its performance greatly ...
Cross-validation (CV) is the most popular approach for selecting the optimal hyper-parameters, but it has high computational cost because it requires training the learner multiple times. ...
This is an interesting attempt to apply the theoretical notion of BIF for practical model selection in semi-supervised learning. ...
arXiv:1902.04768v1
fatcat:h2cge7op6zcennf7i3rqqec5pu
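The entry above concerns hyper-parameter selection for manifold regularization by cross-validation. For reference, here is a minimal sketch of the standard, expensive k-fold CV grid search that the paper aims to avoid; it is not the paper's BIF-based approximation, and KernelRidge stands in for a manifold-regularized learner such as LapRLS.

```python
# Minimal sketch: hyper-parameter selection by k-fold cross-validation.
# This is the standard, expensive CV baseline the entry refers to, not the
# paper's BIF-based approximation; KernelRidge stands in for a
# manifold-regularized learner such as LapRLS.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV

X, y = make_regression(n_samples=200, n_features=10, noise=0.5, random_state=0)

param_grid = {"alpha": np.logspace(-4, 2, 7)}  # candidate regularization strengths
search = GridSearchCV(KernelRidge(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)

print("best alpha:", search.best_params_["alpha"])
print("mean CV score (R^2):", search.best_score_)
```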
A Studious Approach to Semi-Supervised Learning
[article]
2021
arXiv
pre-print
The problem of learning from few labeled examples while using large amounts of unlabeled data has been approached by various semi-supervised methods. ...
This brings forward the potential of distillation as an effective solution to enhance performance in semi-supervised computer vision tasks while maintaining deployability. ...
This paper is an empirical study of distillation-based semi-supervised learning to overcome overfitting, a common problem in semi-supervised setups, and to improve performance when limited to small deployable ...
arXiv:2109.08924v1
fatcat:qzyigrlvlrdzrnbttdrnvt4oo4
Semi-supervised Learning for Phenotyping Tasks
2015
AMIA Annual Symposium Proceedings
Semi-supervised learning takes advantage of both scarce labeled and plentiful unlabeled data. ...
In this work, we study a family of semi-supervised learning algorithms based on Expectation Maximization (EM) in the context of several phenotyping tasks. ...
(Figure 1: learning curves for the hyper-parameter set by cross-validation.) ...
pmid:26958183
pmcid:PMC4765699
fatcat:qrphiepnyjgcpffzemqmnfnemu
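The entry above describes EM-based semi-supervised learning for phenotyping. Below is a minimal, hedged sketch of the general idea (fit on the labeled subset, then alternate pseudo-labeling the unlabeled pool with confidence-weighted refitting); the Gaussian Naive Bayes model, iteration count, and 50-example labeled split are illustrative assumptions, not the paper's setup.

```python
# Minimal EM-style semi-supervised sketch: fit on the labeled subset, then
# alternate (E) pseudo-labeling the unlabeled pool and (M) refitting with
# confidence weights. GaussianNB and 10 iterations are illustrative choices.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
labeled = np.zeros(len(y), dtype=bool)
labeled[:50] = True  # pretend only the first 50 examples are labeled

clf = GaussianNB().fit(X[labeled], y[labeled])
for _ in range(10):
    # E-step: class posteriors for the unlabeled pool
    probs = clf.predict_proba(X[~labeled])
    pseudo = probs.argmax(axis=1)
    # M-step: refit on labeled + pseudo-labeled data, weighting pseudo-labels
    # by their predicted confidence
    X_all = np.vstack([X[labeled], X[~labeled]])
    y_all = np.concatenate([y[labeled], pseudo])
    w_all = np.concatenate([np.ones(labeled.sum()), probs.max(axis=1)])
    clf = GaussianNB().fit(X_all, y_all, sample_weight=w_all)

print("accuracy on the labeled subset:", clf.score(X[labeled], y[labeled]))
```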
Learning Representational Invariances for Data-Efficient Action Recognition
[article]
2022
arXiv
pre-print
When integrated with existing semi-supervised learning frameworks, we show that our data augmentation strategy leads to promising performance on the Kinetics-100/400, Mini-Something-v2, UCF-101, and HMDB ...
We also validate our data augmentation strategy in the fully supervised setting and demonstrate improved performance. ...
semi-supervised learning. ...
arXiv:2103.16565v2
fatcat:2kz2f6yc3jb43gpdfg6ao7jo6a
Spectroscopy Approaches for Food Safety Applications: Improving Data Efficiency Using Active Learning and Semi-Supervised Learning
[article]
2022
arXiv
pre-print
Specifically, we leverage Active Learning (AL) and Semi-Supervised Learning (SSL) and investigate four approaches: baseline passive learning, AL, SSL, and a hybrid of AL and SSL. ...
In this paper, we explore different approaches of data annotation and model training to improve data efficiency for ML applications. ...
We adopt 5-fold cross-validation for both datasets. ...
arXiv:2110.03765v4
fatcat:z3nkgi3airgz7eh36ogtqfqdxi
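The entry above reports 5-fold cross-validation of an SSL pipeline. A minimal sketch of evaluating self-training under 5-fold CV with most labels hidden is shown below; the digits dataset, SVC base classifier, 0.8 confidence threshold, and 80% hidden-label rate are stand-ins, not the paper's spectroscopy data or settings.

```python
# Minimal sketch: evaluating self-training (pseudo-labeling) under 5-fold
# cross-validation with most training labels hidden. The digits dataset, SVC
# base classifier, 0.8 confidence threshold, and 80% hidden-label rate are
# stand-ins, not the paper's spectroscopy setup.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.metrics import accuracy_score
from sklearn.model_selection import StratifiedKFold
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
rng = np.random.RandomState(0)

scores = []
for train_idx, test_idx in StratifiedKFold(5, shuffle=True, random_state=0).split(X, y):
    y_train = y[train_idx].copy()
    hidden = rng.rand(len(y_train)) < 0.8   # hide 80% of the training labels
    y_train[hidden] = -1                    # -1 marks "unlabeled" for sklearn
    model = SelfTrainingClassifier(SVC(probability=True), threshold=0.8)
    model.fit(X[train_idx], y_train)
    scores.append(accuracy_score(y[test_idx], model.predict(X[test_idx])))

print("5-fold CV accuracy with ~20% of labels:", np.mean(scores))
```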
Semi-Supervised Histology Classification using Deep Multiple Instance Learning and Contrastive Predictive Coding
[article]
2019
arXiv
pre-print
We propose to overcome such limitations with a two-stage semi-supervised approach that combines the power of data-efficient self-supervised feature learning via contrastive predictive coding (CPC) and ...
We apply our two-stage CPC + MIL semi-supervised pipeline to the binary classification of breast cancer histology images. ...
We propose a two-stage semi-supervised approach that attempts to help mitigate both of these key challenges by combining MIL with data-efficient self-supervised learning via contrastive predictive coding ...
arXiv:1910.10825v3
fatcat:vm4m2w6oqfhdvearkxhtf6kg74
Self Semi Supervised Neural Architecture Search for Semantic Segmentation
[article]
2022
arXiv
pre-print
In this paper, we propose a Neural Architecture Search strategy based on self supervision and semi-supervised learning for the task of semantic segmentation. ...
the structure of the unlabeled data with semi-supervised learning. ...
Our approach is based on self supervision and semi-supervised learning for semantic segmentation. ...
arXiv:2201.12646v2
fatcat:dqasniazhjhclm2kzjx6vwfimy
Dual-Teacher: Integrating Intra-domain and Inter-domain Teachers for Annotation-efficient Cardiac Segmentation
[article]
2020
arXiv
pre-print
semi-supervised learning and domain adaptation methods by a large margin. ...
In this paper, we aim to investigate the feasibility of simultaneously leveraging abundant unlabeled data and well-established cross-modality data for annotation-efficient medical image segmentation. ...
Conclusion We present a novel annotation-efficient semi-supervised domain adaptation framework for multi-modality cardiac segmentation. ...
arXiv:2007.06279v1
fatcat:rtungnerpvhs5bav2tkymvvdsq
Supervision Accelerates Pre-training in Contrastive Semi-Supervised Learning of Visual Representations
[article]
2020
arXiv
pre-print
We investigate a strategy for improving the efficiency of contrastive learning of visual representations by leveraging a small amount of supervised information during pre-training. ...
On ImageNet, we find that SuNCEt can be used to match the semi-supervised learning accuracy of previous contrastive approaches while using less than half the amount of pre-training and compute. ...
the current state-of-the-art for contrastive semi-supervised learning of visual representations. ...
arXiv:2006.10803v2
fatcat:vkmfozhlpjhebmicezusr7oy4q
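The entry above refers to SuNCEt, a supervised noise-contrastive objective used during contrastive pre-training. The sketch below is a simplified supervised-contrastive loss in the same spirit, not the paper's exact objective; the temperature and toy batch are arbitrary assumptions.

```python
# Simplified supervised-contrastive loss sketch (SupCon-style). This is an
# illustration of mixing label information into a contrastive pre-training
# loss, not the paper's exact SuNCEt objective.
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """embeddings: (N, D) features; labels: (N,) integer class ids."""
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t() / temperature                     # pairwise similarities
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))   # drop self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    positives = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_counts = positives.sum(dim=1).clamp(min=1)
    pos_log_prob = torch.where(positives, log_prob, torch.zeros_like(log_prob))
    loss = -pos_log_prob.sum(dim=1) / pos_counts
    return loss[positives.any(dim=1)].mean()          # anchors with >=1 positive

# toy usage: 8 embeddings, 4 classes with 2 samples each
z = torch.randn(8, 16)
y = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
print(supervised_contrastive_loss(z, y))
```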
A Collective Learning Approach for Semi-Supervised Data Classification
2018
Pamukkale University Journal of Engineering Sciences
Results are shown in tables as testing accuracies obtained by ten-fold cross-validation. ...
In this paper we suggest a collective method for solving semi-supervised data classification problems. Examples in R^1 are presented and solved to gain a clear understanding. ...
Table: Algorithm 2 (initialization technique + supervised classification technique) with 10-fold cross-validation results (%) on the LIVER, WBCD, and HEART datasets. ...
doi:10.5505/pajes.2017.44341
fatcat:e6k5anvgffc67fdsnodyv7t334
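The entry above reports 10-fold cross-validated accuracies on the LIVER, WBCD, and HEART datasets. A minimal sketch of producing such numbers is below; since those datasets are not bundled with scikit-learn, the built-in breast-cancer dataset and an RBF SVM stand in.

```python
# Minimal sketch: reporting 10-fold cross-validated test accuracy, as in the
# table above. The built-in breast-cancer dataset and an RBF SVM stand in for
# the LIVER/WBCD/HEART experiments.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale"))
scores = cross_val_score(model, X, y, cv=10)
print(f"10-fold CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```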
Curriculum semi-supervised segmentation
[article]
2019
arXiv
pre-print
This study investigates a curriculum-style strategy for semi-supervised CNN segmentation, which devises a regression network to learn image-level information such as the size of a target region. ...
We evaluated our proposed strategy for left ventricle segmentation in magnetic resonance images (MRI), and compared it to standard proposal-based semi-supervision strategies. ...
The lack of large annotated datasets has driven research in deep segmentation models that rely on reduced supervision for training, such as weakly [11, 9, 17, 8] or semi-supervised [1, 19] learning ...
arXiv:1904.05236v2
fatcat:uccahmo3bzfhrfzh2ifvh3ipce
Uncertainty-Guided Mutual Consistency Learning for Semi-Supervised Medical Image Segmentation
[article]
2021
arXiv
pre-print
for self-ensembling and cross-task consistency learning from task-level regularization to exploit geometric shape information. ...
Semi-supervised learning has been widely applied to medical image segmentation tasks since it alleviates the heavy burden of acquiring expert-examined annotations and takes the advantage of unlabeled data ...
[6] to further improve and validate the performance of semi-supervised ...
Zhang, “Dual-task mutual learning for semi-supervised medical image segmentation,” in Pattern Recognition and Computer ...
arXiv:2112.02508v1
fatcat:ofgv42dygvhyxphgh2wbcgdvoy
Leave Zero Out: Towards a No-Cross-Validation Approach for Model Selection
[article]
2020
arXiv
pre-print
As the main workhorse for model selection, Cross Validation (CV) has achieved an empirical success due to its simplicity and intuitiveness. ...
In addition, the proposed validation approach is suitable for a wide range of learning settings due to the independence of both augmentation and out-of-sample estimation from the learning process. ...
To validate the efficiency and the effectiveness of LZO, we conduct multiple experiments on 20 supervised datasets and 6 semi-supervised datasets. ...
arXiv:2012.13309v2
fatcat:5prgibbgrzajncr6oocona6ose
Clustering Analysis for Semi-supervised Learning Improves Classification Performance of Digital Pathology
[chapter]
2015
Lecture Notes in Computer Science
Their cross-validated classification performances were compared with each other using the area under the ROC curve measure. ...
Semi-supervised learning methods are able to learn reliable models from a small number of labeled instances and large quantities of unlabeled data. ...
Fig. 2. Mean area under the ROC curve (mAUC) comparison of 8-fold subject-wise cross-validation (n = 2302 image patches) for supervised and semi-supervised SVM methods using different percentages of ...
doi:10.1007/978-3-319-24888-2_32
fatcat:4i3iktoxmbaipdeqn5negba32i
Integrated graph-based semi-supervised multiple/single instance learning framework for image annotation
2008
Proceeding of the 16th ACM international conference on Multimedia - MM '08
More specifically, we propose an integrated graph-based semi-supervised learning framework to utilize these two types of representations simultaneously, and explore an effective and computationally efficient ...
Recently, many learning methods based on multiple-instance (local) or single-instance (global) representations of images have been proposed for image annotation. ...
To the best of our knowledge, existing learning-based image annotation methods, including supervised MI learning, semi-supervised SI learning, and semi-supervised MI learning, such as the aforementioned ...
doi:10.1145/1459359.1459446
dblp:conf/mm/TangLQC08
fatcat:ltugr5vuafa6hj7tnor375ooom
Showing results 1 — 15 out of 57,064 results