Theoretical Analysis of Self-Training with Deep Networks on Unlabeled Data [article]

Colin Wei, Kendrick Shen, Yining Chen, Tengyu Ma
2022 arXiv   pre-print
This work provides a unified theoretical analysis of self-training with deep networks for semi-supervised learning, unsupervised domain adaptation, and unsupervised learning.  ...  Self-training algorithms, which train a model to fit pseudolabels predicted by another previously-learned model, have been very successful for learning with unlabeled data using neural networks.  ...  We train a classifier to fit pseudolabels while regularizing adversarial robustness on the target domain using the VAT loss of (Miyato et al., 2018), obtaining the following loss over classifier F:  ...
arXiv:2010.03622v5 fatcat:smwna6sjgnfi5a6lp32nhzatny
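
The Wei et al. snippet above is truncated just before the loss it names. Below is a minimal PyTorch sketch of such an objective, pairing cross-entropy on hard pseudolabels with the VAT penalty of Miyato et al. (2018); the function names and hyperparameter values are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


def _l2_normalize(d):
    # Normalize each example's perturbation to unit L2 norm.
    norm = d.flatten(1).norm(dim=1).view(-1, *([1] * (d.dim() - 1)))
    return d / (norm + 1e-8)


def vat_penalty(model, x, xi=1e-6, eps=2.5):
    """VAT smoothness penalty (Miyato et al., 2018): KL divergence between
    predictions at x and at x + r_adv, the most KL-increasing small
    perturbation, found with one step of power iteration."""
    with torch.no_grad():
        p = torch.softmax(model(x), dim=1)
    d = xi * _l2_normalize(torch.randn_like(x))
    d.requires_grad_(True)
    kl = F.kl_div(torch.log_softmax(model(x + d), dim=1), p,
                  reduction="batchmean")
    grad = torch.autograd.grad(kl, d)[0]
    r_adv = eps * _l2_normalize(grad)
    return F.kl_div(torch.log_softmax(model(x + r_adv), dim=1), p,
                    reduction="batchmean")


def self_training_vat_loss(classifier, pseudolabeler, x_target, lam=1.0):
    """Cross-entropy against hard pseudolabels from a previously learned
    model, plus the VAT penalty on the target domain (weights assumed)."""
    with torch.no_grad():
        y_pseudo = pseudolabeler(x_target).argmax(dim=1)
    ce = F.cross_entropy(classifier(x_target), y_pseudo)
    return ce + lam * vat_penalty(classifier, x_target)
```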

Medical Image Segmentation with Limited Supervision: A Review of Deep Network Models [article]

Jialin Peng, Ye Wang
2021 arXiv   pre-print
Despite the remarkable performance of deep learning methods on various tasks, most cutting-edge models rely heavily on large-scale annotated training examples, which are often unavailable for clinical  ...  application of deep learning models in medical image segmentation.  ...  They first fine-tuned the unsupervised pretrained model and then distilled the model into a smaller one with the unlabeled data. More constructive theoretical analysis is needed.  ... 
arXiv:2103.00429v1 fatcat:p44a5e34sre4nasea5kjvva55e

Medical Image Segmentation with Limited Supervision: A Review of Deep Network Models

Jialin Peng, Ye Wang
2021 IEEE Access  
Despite the remarkable performance of deep learning methods on various tasks, most cutting-edge models rely heavily on large-scale annotated training examples, which are often unavailable for clinical  ...  application of deep learning models in medical image segmentation.  ...  [57] investigated the effectiveness of pre-trained deep CNNs with sufficient fine-tuning compared to training a deep network from scratch on four different medical imaging applications.  ... 
doi:10.1109/access.2021.3062380 fatcat:r5vsec2yfzcy5nk7wusiftyayu

Co-Training for Visual Object Recognition Based on Self-Supervised Models Using a Cross-Entropy Regularization

Gabriel Díaz, Billy Peralta, Luis Caro, Orietta Nicolis
2021 Entropy  
In this work, we propose a co-training model for visual object recognition using deep neural networks by adding layers of self-supervised neural networks as intermediate inputs to the views, where the  ...  However, deep learning techniques require a large amount of labeled data, which is usually expensive to obtain.  ...
doi:10.3390/e23040423 pmid:33916017 fatcat:ajfzk2s2b5hbpopngxh6igngsi

Modeling surface appearance from a single photograph using self-augmented convolutional neural networks

Xiao Li, Yue Dong, Pieter Peers, Xin Tong
2017 ACM Transactions on Graphics  
To reduce the amount of required labeled training data, we propose to leverage the appearance information embedded in unlabeled images of spatially varying materials to self-augment the training process  ...  We demonstrate the efficacy of the proposed network structure on spatially varying wood, metals, and plastics, as well as thoroughly validate the effectiveness of the self-augmentation training process  ...  Fig. 4: Summary of the self-augmentation training process (training with labeled training data pairs; self-augmentation with unlabeled training data).  ...
doi:10.1145/3072959.3073641 fatcat:qfpuw4jmffdvdfncjny4436ntm
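
A schematic reading of the self-augmentation loop summarized in Fig. 4 above: predict appearance parameters for an unlabeled photograph, re-render a synthetic image from those parameters, and reuse the pair as labeled training data. The `render` operator and both function names below are illustrative, not the paper's code.

```python
def self_augment_pair(net, render, photo_unlabeled):
    """One self-augmentation step (schematic): the network's own prediction
    on an unlabeled photo, re-rendered, becomes a new labeled pair."""
    params = net(photo_unlabeled)   # predicted spatially varying appearance
    synthetic = render(params)      # forward rendering of the prediction
    return synthetic, params        # treat as an (input, label) training pair
```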

Statistical-Mechanical Analysis of Pre-training and Fine Tuning in Deep Learning

Masayuki Ohzeki
2015 Journal of the Physical Society of Japan  
The self-organized classifier is then supplied with small amounts of labelled data, as in deep learning.  ...  Although we employ a simple single-layer perceptron model, rather than directly analyzing a multi-layer neural network, we find a nontrivial phase transition that is dependent on the number of unlabelled  ...
doi:10.7566/jpsj.84.034003 fatcat:u2siv6iuojfjtlfcdqvxfbhzeu

Semi-Supervised Self-Growing Generative Adversarial Networks for Image Recognition [article]

Haoqian Wang, Zhiwei Xu, Jun Xu, Wangpeng An, Lei Zhang, Qionghai Dai
2019 arXiv   pre-print
...of deep neural networks.  ...  By using training data with only 4% of the facial attributes labeled, the SGGAN approach can achieve accuracy comparable to leading supervised deep learning methods that use all labeled facial attributes.  ...  Self-training [49] is one of the earliest semi-supervised learning strategies, using unlabeled data to improve the training of recognition systems.  ...
arXiv:1908.03850v1 fatcat:pwolhiwzlrhf3igbf6p5hx6qfa

Learning How to Self-Learn: Enhancing Self-Training Using Neural Reinforcement Learning [article]

Chenhua Chen, Yue Zhang
2018 arXiv   pre-print
Traditional self-training methods depend on heuristics such as model confidence for instance selection, the manual adjustment of which can be expensive.  ...  Based on neural network representations of sentences, our model automatically learns an optimal policy for instance selection.  ...  Self-Training for NER Tagging. In this scenario, we have a training set with gold labels and a large amount of unlabeled data.  ...
arXiv:1804.05734v1 fatcat:xfik64r4i5fcbgn5uw4yhcnqnq

Rethinking the Value of Labels for Improving Class-Imbalanced Learning [article]

Yuzhe Yang, Zhi Xu
2020 arXiv   pre-print
Real-world data often exhibits long-tailed distributions with heavy class imbalance, posing great challenges for deep recognition models.  ...  Specifically, we confirm that (1) positively, imbalanced labels are valuable: given more unlabeled data, the original labels can be leveraged with the extra data to reduce label bias in a semi-supervised  ...  After the first stage of learning with self-supervision, we can then perform any standard training approach to learn the final model initialized by the pre-trained network.  ... 
arXiv:2006.07529v2 fatcat:m7vixcc6dzbvnh7oawwihff5u4
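
A hedged sketch of the semi-supervised regime the Yang and Xu snippet describes: fit on the imbalanced labeled set, pseudo-label the extra unlabeled pool, and retrain on the union so the extra data dilutes the label bias. scikit-learn stands in for the deep recognition model, and the function name is illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


def rebalance_with_unlabeled(X_lab, y_lab, X_unlab):
    """Stage 1: fit on the (imbalanced) labeled set. Stage 2: pseudo-label
    the extra unlabeled pool. Stage 3: refit on the union."""
    base = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    y_pseudo = base.predict(X_unlab)
    X_all = np.vstack([X_lab, X_unlab])
    y_all = np.concatenate([y_lab, y_pseudo])
    return LogisticRegression(max_iter=1000).fit(X_all, y_all)
```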

Generative Adversarial Active Learning [article]

Jia-Jie Zhu, José Bento
2017 arXiv   pre-print
We propose a new approach to active learning by query synthesis using Generative Adversarial Networks (GANs).  ...  We generate queries according to the uncertainty principle, but our idea can work with other active learning principles.  ...  The results of this work are enough to inspire future studies of deep generative models in active learning.  ...
arXiv:1702.07956v5 fatcat:5kgjifzsorclrgc2o2x772rvty

A Deep Learning Approach for Network Intrusion Detection System

Ahmad Javaid, Quamar Niyaz, Weiqing Sun, Mansoor Alam
2016 Proceedings of the 9th EAI International Conference on Bio-inspired Information and Communications Technologies (formerly BIONETICS)  
We use Self-taught Learning (STL), a deep-learning-based technique, on NSL-KDD, a benchmark dataset for network intrusion.  ...  We present the performance of our approach and compare it with a few previous works; the compared metrics include accuracy, precision, recall, and F-measure.  ...  Evaluation based on training data: We applied 10-fold cross-validation on the training data to evaluate the classification accuracy of self-taught learning (STL) for the 2-class, 5-class, and 23-class settings.  ...
doi:10.4108/eai.3-12-2015.2262516 dblp:journals/sesa/JavaidNSA16 fatcat:v5mkb4ttjrbehme6s6dajwsi4u
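
The evaluation snippet above describes 10-fold cross-validation on the NSL-KDD training data for the 2-, 5-, and 23-class settings. A generic sketch of that protocol with scikit-learn; the placeholder classifier is an assumption, not the paper's STL model.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score


def ten_fold_accuracy(X, y, clf=None):
    """Mean and std of 10-fold cross-validated accuracy over (X, y)."""
    clf = clf if clf is not None else LogisticRegression(max_iter=1000)
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
    return scores.mean(), scores.std()
```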

Semi-supervised Deep Learning for Image Classification with Distribution Mismatch: A Survey [article]

Saul Calderon-Ramirez, Shengxiang Yang, David Elizondo
2022 arXiv   pre-print
Deep learning models rely on an abundance of labelled observations to train a prospective model.  ...  In a semi-supervised setting, unlabelled data is used to improve the accuracy and generalization of a model trained with small labelled datasets.  ...  Pseudo-label semi-supervised deep learning: In pseudo-label semi-supervised deep learning (PLT-SSDL), also known as self-training, self-teaching, or bootstrapping, pseudo-labels are estimated for unlabelled  ...
arXiv:2203.00190v3 fatcat:gtdpu5kmmfh67ceseq67eih5ae
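
A minimal sketch of the self-training/bootstrapping loop the survey defines: estimate pseudo-labels for unlabelled points and fold the confident ones back into the labeled set. The confidence threshold and base classifier here are assumptions; scikit-learn also ships a ready-made version as sklearn.semi_supervised.SelfTrainingClassifier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


def self_train(X_lab, y_lab, X_unlab, threshold=0.95, rounds=5):
    """Iterative pseudo-labeling: refit, then move points the current model
    labels with high confidence from the unlabeled pool into the labeled
    set, and repeat until nothing clears the threshold."""
    clf = LogisticRegression(max_iter=1000)
    pool = X_unlab
    for _ in range(rounds):
        clf.fit(X_lab, y_lab)
        if len(pool) == 0:
            break
        proba = clf.predict_proba(pool)
        confident = proba.max(axis=1) >= threshold
        if not confident.any():
            break
        X_lab = np.vstack([X_lab, pool[confident]])
        y_lab = np.concatenate(
            [y_lab, clf.classes_[proba[confident].argmax(axis=1)]])
        pool = pool[~confident]
    return clf
```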

Robust Self-Ensembling Network for Hyperspectral Image Classification [article]

Yonghao Xu, Bo Du, Liangpei Zhang
2021 arXiv   pre-print
...unlabeled data in HSI to assist the network training.  ...  With the constraint of both the supervised loss from the labeled data and the unsupervised loss from the unlabeled data, the base network and the ensemble network can learn from each other, achieving the  ...  to utilize the unlabeled data in HSI to assist the training of deep networks with very limited labeled samples. 2) To make self-ensembling learning more efficient, a simple but effective spectral-spatial  ...
arXiv:2104.03765v1 fatcat:nk2oc7tmorhgzasrh7qzkummtu
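
The snippet's training signal, a supervised loss on labeled data plus an unsupervised consistency loss tying the base network to the ensemble network, is sketched below in PyTorch. Treating the ensemble as an exponential moving average of the base weights is the standard mean-teacher construction and an assumption here, not a detail stated in the snippet.

```python
import torch
import torch.nn.functional as F


def self_ensembling_loss(base, ema, x_lab, y_lab, x_unlab, w=1.0):
    """Supervised loss on labeled samples plus an unsupervised consistency
    loss pulling the base network's predictions on unlabeled samples toward
    those of the ensemble network."""
    sup = F.cross_entropy(base(x_lab), y_lab)
    with torch.no_grad():
        target = torch.softmax(ema(x_unlab), dim=1)
    cons = F.mse_loss(torch.softmax(base(x_unlab), dim=1), target)
    return sup + w * cons


@torch.no_grad()
def ema_update(base, ema, alpha=0.99):
    # Ensemble weights as an exponential moving average of the base weights
    # (the mean-teacher construction; an assumption, see above).
    for p_ema, p_base in zip(ema.parameters(), base.parameters()):
        p_ema.mul_(alpha).add_(p_base, alpha=1.0 - alpha)
```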

Information-Theoretic Active SOM for Improving Generalization Performance

Ryotaro Kamimura
2016 International Journal of Advanced Research in Artificial Intelligence (IJARAI)  
In this paper, we introduce a new type of information-theoretic method called the "information-theoretic active SOM", based on self-organizing maps (SOM), for training multi-layered neural networks.  ...  all input data with and without labels.  ...  The present method suggests that the information-theoretic SOM can be used to train neural networks with information in unlabeled data.  ...
doi:10.14569/ijarai.2016.050804 fatcat:3qdenn7bcbcz3cyokclfkuoxum

Domain Adaptation with Randomized Expectation Maximization [article]

Twan van Laarhoven, Elena Marchiori
2018 arXiv   pre-print
The potential limitations of this assumption are alleviated by the flexibility of the method, which can directly incorporate deep features extracted from a pre-trained deep neural network.  ...  Despite their success, state-of-the-art methods based on this approach are either involved or unable to scale directly to data with many features.  ...
arXiv:1803.07634v1 fatcat:e744w4t4nfcuzewpuinsevkru4
Showing results 1 — 15 of 5,958 results