Pre-Train Your Loss: Easy Bayesian Transfer Learning with Informative Priors [article]

Ravid Shwartz-Ziv, Micah Goldblum, Hossein Souri, Sanyam Kapoor, Chen Zhu, Yann LeCun, Andrew Gordon Wilson
2022 arXiv   pre-print
standard pre-training strategies.  ...  Instead, we show that we can learn highly informative posteriors from the source task, through supervised or self-supervised approaches, which then serve as the basis for priors that modify the whole loss  ...  , (4) Bayesian inference with non-learned zero-mean priors, (5) SGD with non-learned zero-mean priors.  ... 
arXiv:2205.10279v1 fatcat:kh5t6rp3bbeslhia5avovcpo5a
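The contrast this snippet draws, learned informative priors versus non-learned zero-mean priors, comes down to where the prior penalty in a MAP objective is centered. A minimal numpy sketch of that idea (not the paper's implementation; all names and values are illustrative):

```python
import numpy as np

def map_loss(theta, nll, prior_mean, prior_var):
    """Negative log-posterior: data term plus a Gaussian prior penalty.

    A zero-mean prior (prior_mean = 0) recovers ordinary weight decay;
    an informative prior instead centers the penalty on parameters
    learned on the source task.
    """
    prior_nll = 0.5 * np.sum((theta - prior_mean) ** 2 / prior_var)
    return nll + prior_nll

theta = np.array([0.9, -0.4])
source_mean = np.array([1.0, -0.5])   # hypothetical source-task posterior mean
source_var = np.array([0.1, 0.2])     # hypothetical source-task posterior variance

informative = map_loss(theta, nll=2.0, prior_mean=source_mean, prior_var=source_var)
zero_mean = map_loss(theta, nll=2.0, prior_mean=0.0, prior_var=1.0)
```

With the same data term, the informative prior penalizes deviation from the source-task solution rather than from zero, which is how the source task reshapes the whole loss rather than just the initialization.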

Federated Semi-Supervised Learning with Inter-Client Consistency &amp; Disjoint Learning [article]

Wonyong Jeong, Jaehong Yoon, Eunho Yang, Sung Ju Hwang
2021 arXiv   pre-print
learning with semi-supervised learning.  ...  FedMatch improves upon naive combinations of federated learning and semi-supervised learning approaches with a new inter-client consistency loss and decomposition of the parameters for disjoint learning.  ...  We use modified AlexNet-like networks. Table 1: performance comparison on the streaming non-IID dataset (Fashion-MNIST) with 10 clients (F = 1.0), standard scenario.  ... 
arXiv:2006.12097v3 fatcat:znubc5dbsbcqhaift6rjeeftuu

Non-standard situation detection in smart water metering

O. Kainz, E. Karpiel, R. Petija, M. Michalko, F. Jakab
2020 Open Computer Science  
The proposed solution needs to fit the requirements for correct, efficient and real-time detection of non-standard situations in actual water consumption with minimal required consumer intervention to  ...  The final implemented and tested solution evaluates anomalies in water consumption for a given time on a specific day and month using machine learning with a semi-supervised approach.  ...  Methods based on semi-supervised learning use the features of supervised and unsupervised learning.  ... 
doi:10.1515/comp-2020-0190 fatcat:ixagnrguknb3zhb27qflfbpp54

Simple Control Baselines for Evaluating Transfer Learning [article]

Andrei Atanov, Shijian Xu, Onur Beker, Andrei Filatov, Amir Zamir
2022 arXiv   pre-print
Transfer learning has witnessed remarkable progress in recent years, for example, with the introduction of augmentation-based contrastive self-supervised learning methods.  ...  To demonstrate how the evaluation standard can be employed, we provide an example empirical study investigating a few basic questions about self-supervised learning.  ...  The proposed standard is not limited to self-supervised learning and is applicable to any transfer learning evaluation.  ... 
arXiv:2202.03365v1 fatcat:snn7svtpczc6zcdayko2uqsyfm

General Supervision via Probabilistic Transformations [article]

Santiago Mazuelas, Aritz Perez
2019 arXiv   pre-print
This paper presents a unifying framework for supervised classification with general ensembles of training data, and proposes the learning methodology of generalized robust risk minimization (GRRM).  ...  Current learning techniques are tailored to one specific scheme and cannot handle general ensembles of training data.  ...  for non-standard supervision.  ... 
arXiv:1901.08552v1 fatcat:xafqsvg54zeu5i4bnvhuwzajpe

Decoupled Adversarial Contrastive Learning for Self-supervised Adversarial Robustness [article]

Chaoning Zhang, Kang Zhang, Chenshuang Zhang, Axi Niu, Jiu Feng, Chang D. Yoo, In So Kweon
2022 arXiv   pre-print
Moreover, our DeACL constitutes a more explainable solution, and its success also bridges the gap with semi-supervised AT for exploiting unlabeled samples for robust representation learning.  ...  Adversarial training (AT) for robust representation learning and self-supervised learning (SSL) for unsupervised representation learning are two active research fields.  ...  At stage 1, we perform standard (i.e. non-robust) SSL to learn instance-wise representation as a target vector.  ... 
arXiv:2207.10899v1 fatcat:wdftdnzxbbc6pjaidgcud4fkqm

Choice of training label matters: how to best use deep learning for quantitative MRI parameter estimation [article]

Sean C. Epstein, Timothy J. P. Bray, Margaret Hall-Craggs, Hui Zhang
2022 arXiv   pre-print
a supervised learning framework.  ...  This result is counterintuitive - incorporating prior knowledge with supervised labels should, in theory, lead to improved accuracy.  ...  Performance is summarised by bias and RMSE with respect to ground truth, and by standard deviation with respect to noise repetitions. Conventional MLE fitting is provided as a non-DNN reference standard.  ... 
arXiv:2205.05587v1 fatcat:4hej63i2n5hizh53gieidd7ezu

Deep learning with mixed supervision for brain tumor segmentation

Pawel Mlynarski, Hervé Delingette, Antonio Criminisi, Nicholas Ayache
2019 Journal of Medical Imaging  
We show that the proposed approach provides a significant improvement in segmentation performance compared to the standard supervised learning.  ...  The idea of our approach is to extend segmentation networks with an additional branch performing image-level classification.  ...  Segmentation performance increases quickly with the first fully-annotated cases, both for the standard supervised learning and the learning with mixed supervision.  ... 
doi:10.1117/1.jmi.6.3.034002 pmid:31423456 pmcid:PMC6689144 fatcat:nndbxyudwrhpfo2ewyf63fz7j4

Deep Learning with Mixed Supervision for Brain Tumor Segmentation [article]

Pawel Mlynarski, Hervé Delingette, Antonio Criminisi, Nicholas Ayache
2018 arXiv   pre-print
We show that the proposed approach provides a significant improvement of segmentation performance compared to the standard supervised learning.  ...  The idea of our approach is to extend segmentation networks with an additional branch performing image-level classification.  ...  Segmentation performance increases quickly with the first fully-annotated cases, both for the standard supervised learning and the learning with mixed supervision.  ... 
arXiv:1812.04571v1 fatcat:hbwfgqkunjdkdbhupxgyksy5fm

Fisher vectors meet Neural Networks: A hybrid classification architecture

Florent Perronnin, Diane Larlus
2015 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)  
We show experimentally that this hybrid architecture significantly outperforms standard FV systems without incurring the high cost that comes with CNNs.  ...  We propose a hybrid architecture that combines their strengths: the first unsupervised layers rely on the FV while the subsequent fully-connected supervised layers are trained with back-propagation.  ...  We experimented with a number of supervised layers varying from 1 to 4 (i.e., 0 to 3 hidden supervised layers).  ... 
doi:10.1109/cvpr.2015.7298998 dblp:conf/cvpr/PerronninL15 fatcat:welqbcdzvjgapjeyxlopm6eyiu

pyDML: A Python Library for Distance Metric Learning

Juan-Luis Suárez, Salvador García, Francisco Herrera
2020 Journal of machine learning research  
The package relies on the scipy ecosystem, it is fully compatible with scikit-learn, and is distributed under GPLv3 license.  ...  Distance metric learning can be useful to improve similarity learning algorithms, such as the nearest neighbors classifier, and also has other applications, like dimensionality reduction.  ...  In Python, the metric-learn library (de Vazelhes et al., 2019) provides 9 different DML algorithms, mostly oriented towards weak supervised learning, with the exception of a few classical supervised  ... 
dblp:journals/jmlr/SuarezGH20 fatcat:2afrujuexzctplzn5eezd6673y
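Libraries such as pyDML and metric-learn mostly learn Mahalanobis metrics: a linear map L (equivalently a PSD matrix M = LᵀL) that reshapes Euclidean distance before, say, a nearest-neighbors classifier. A minimal numpy sketch of that formulation, independent of either library's actual API:

```python
import numpy as np

def mahalanobis_dist(x, y, L):
    """Distance under a learned linear map L: d(x, y) = ||L(x - y)||_2.

    Mahalanobis metric learners (NCA, LMNN, ...) output either L or
    M = L^T L; both define the same distance.
    """
    diff = L @ (x - y)
    return float(np.sqrt(diff @ diff))

x, y = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# With L = I the metric reduces to plain Euclidean distance.
assert np.isclose(mahalanobis_dist(x, y, np.eye(2)), np.sqrt(2.0))

# A learned L can stretch discriminative directions and shrink noisy ones;
# this diagonal L is purely illustrative.
L = np.diag([2.0, 0.5])
```

Because the learned metric is just a linear transform followed by Euclidean distance, such learners can expose a scikit-learn-style `transform` that maps the data once, after which any standard distance-based estimator applies.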

Statistical Models for Unsupervised, Semi-Supervised, and Supervised Transliteration Mining

Hassan Sajjad, Helmut Schmid, Alexander Fraser, Hinrich Schütze
2017 Computational Linguistics  
We model transliteration mining as an interpolation of transliteration and non-transliteration sub-models. We evaluate on NEWS 2010 shared task data and on parallel corpora with competitive results.  ...  Our model is efficient, language pair independent and mines transliteration pairs in a consistent fashion in both unsupervised and semi-supervised settings.  ...  Our semi-supervised system learns this as a non-transliteration but it is wrongly annotated as a transliteration in the gold standard.  ... 
doi:10.1162/coli_a_00286 fatcat:vbwx3fgku5bfhns7gpmuzfero4
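The interpolation of transliteration and non-transliteration sub-models described in this snippet is a two-component mixture; a candidate pair can then be mined by thresholding the posterior of the transliteration component. A minimal sketch with hypothetical sub-model scores (the sub-models themselves are character-alignment models in the paper and are not reproduced here):

```python
def mixture_posterior(p_tr, p_ntr, lam):
    """Posterior that a word pair is a transliteration under the mixture
    p(e, f) = lam * p_tr(e, f) + (1 - lam) * p_ntr(e, f)."""
    joint = lam * p_tr + (1.0 - lam) * p_ntr
    return lam * p_tr / joint

# Hypothetical sub-model probabilities for one candidate pair: the
# transliteration model scores it far higher than the non-transliteration model.
post = mixture_posterior(p_tr=1e-6, p_ntr=1e-9, lam=0.3)
is_transliteration = post > 0.5
```

In the unsupervised setting the interpolation weight and sub-model parameters would be fit jointly (e.g. by EM); the semi-supervised variant additionally anchors the transliteration sub-model on labeled pairs.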

A self-supervised vowel recognition system

S.K. Pal, A.K. Datta, D. Dutta Majumder
1980 Pattern Recognition  
An optimum line for self-supervised learning is found to correspond to half of the class variances, beyond which the machine loses its efficiency.  ...  The method uses a single-pattern training procedure for self-supervised learning, and the maximum value of the fuzzy membership function is the basis of recognition.  ...  In Fig. 8, the results obtained using self-supervised learning algorithms are compared with those obtained with non-adaptive and fully supervised learning algorithms.  ... 
doi:10.1016/0031-3203(80)90051-5 fatcat:4s4jnjcthvdrtasf3jzsblcx54
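Recognition by maximum fuzzy membership, as in this entry, amounts to scoring an input against per-class membership functions and taking the argmax. A minimal sketch using one common membership form (the paper's exact function and the formant prototypes below are assumptions, not taken from the source):

```python
import numpy as np

def membership(x, mean, spread):
    """A common fuzzy membership form: peaks at 1 on the class prototype
    and decays with squared normalized distance."""
    return 1.0 / (1.0 + ((x - mean) / spread) ** 2)

def classify(x, means, spreads):
    """Assign x to the class whose fuzzy membership is maximal."""
    scores = [membership(x, m, s) for m, s in zip(means, spreads)]
    return int(np.argmax(scores))

# Two hypothetical vowel-class prototypes over a single formant feature (Hz).
means, spreads = [300.0, 700.0], [50.0, 80.0]
label = classify(320.0, means, spreads)
```

Tying the spread parameter to the class variance is exactly where the snippet's optimum (half of the class variances) would enter: too large a spread blurs the classes and the recognizer loses efficiency.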

Hierarchical Metric Learning for Optical Remote Sensing Scene Categorization [article]

Akashdeep Goel, Biplab Banerjee, Aleksandra Pizurica
2018 arXiv   pre-print
However, standard metric learning techniques do not incorporate the class interaction information in learning the transformation matrix, which is often considered to be a bottleneck while dealing with  ...  at the non-leaf nodes of the tree.  ...  Broadly, the metric learning algorithms can be supervised, weakly-supervised or semi-supervised in nature.  ... 
arXiv:1708.01494v3 fatcat:7l43aeyfgzhlhcbpmmtwnf5yky

Learning under Distributed Weak Supervision [article]

Martin Rajchl, Matthew C.H. Lee, Franklin Schrans, Alice Davidson, Jonathan Passerat-Palmbach, Giacomo Tarroni, Amir Alansary, Ozan Oktay, Bernhard Kainz, Daniel Rueckert
2016 arXiv   pre-print
The availability of training data for supervision is a frequently encountered bottleneck of medical image analysis methods.  ...  In this paper, we examine the use of a crowdsourcing platform for the distribution of super-pixel weak annotation tasks and collect such annotations from a crowd of non-expert raters.  ...  ACKNOWLEDGEMENTS We gratefully acknowledge the support of NVIDIA Corporation with the donation of a Tesla K40 GPU used for this research.  ... 
arXiv:1606.01100v1 fatcat:rxu4kelptvf2bdrgv35dcsc5ey
Showing results 1–15 of 475,174