1,423 Hits in 5.5 sec

Binary Classification from Multiple Unlabeled Datasets via Surrogate Set Classification [article]

Nan Lu, Shida Lei, Gang Niu, Issei Sato, Masashi Sugiyama
2021 arXiv   pre-print
classifier from unlabeled (U) datasets.  ...  In this paper, we propose a new approach for binary classification from m U-sets for m≥2.  ...  U^m classification via Surrogate Set Classification: In this section, we propose a new ERM-based method for learning from multiple U sets via a surrogate set classification task and analyze it theoretically  ... 
arXiv:2102.00678v2 fatcat:stb44ewjdfcori2tenylrorlt4
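The surrogate-set idea in this entry can be sketched in a few lines: each unlabeled set's index becomes a surrogate class label, so the m U-sets form an ordinary m-class supervised dataset. A minimal numpy sketch, assuming m = 3 sets; the set sizes, class priors, and 1-D Gaussian features are illustrative assumptions, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: m = 3 unlabeled sets, each drawn with a
# different (known) class prior over the latent binary label.
priors = [0.2, 0.5, 0.8]                    # P(y = +1) within each U-set
sets = []
for pi in priors:
    y = rng.random(100) < pi                # latent labels, never observed
    x = rng.normal(np.where(y, 1.0, -1.0))  # 1-D feature per sample
    sets.append(x)

# Surrogate set classification: label every sample with the index of
# the U-set it came from, turning the m unlabeled sets into an
# ordinary m-class supervised dataset.
X = np.concatenate(sets)
surrogate_y = np.concatenate([np.full(len(s), i) for i, s in enumerate(sets)])
```

The paper's method then trains an m-class classifier on `(X, surrogate_y)` and recovers the binary classifier from the learned m-class posterior using the sets' class priors; the snippet above shows only the surrogate-label construction.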

Semi-Supervised Crowd Counting via Self-Training on Surrogate Tasks [article]

Yan Liu, Lingqiao Liu, Peng Wang, Pingping Zhang, Yinjie Lei
2020 arXiv   pre-print
density map regression task as the surrogate prediction target; (2) the surrogate target predictors are learned from both labeled and unlabeled data by utilizing a proposed self-training scheme which  ...  Specifically, we propose a novel semi-supervised crowd counting method which is built upon two innovative components: (1) a set of inter-related binary segmentation tasks is derived from the original  ...  Essentially, those surrogate tasks are binary segmentation tasks and we build multiple segmentation predictors for each of them.  ... 
arXiv:2007.03207v2 fatcat:fgah5xk37fdk3auj62gkpo4zqq

Convex Formulation of Multiple Instance Learning from Positive and Unlabeled Bags [article]

Han Bao, Tomoya Sakai, Issei Sato, Masashi Sugiyama
2018 arXiv   pre-print
Multiple instance learning (MIL) is a variation of traditional supervised learning problems where data (referred to as bags) are composed of sub-elements (referred to as instances) and only bag labels  ...  However, it is often difficult to obtain a sufficient number of labeled bags in practical situations, while many unlabeled bags are available.  ...  Learning from Positive and Unlabeled Data We formulate a binary classification problem from positive and unlabeled instances and review existing methods.  ... 
arXiv:1704.06767v3 fatcat:q6lmjvhuj5e35dio64ivpiqtli

Scalable Evaluation and Improvement of Document Set Expansion via Neural Positive-Unlabeled Learning [article]

Alon Jacovi, Gang Niu, Yoav Goldberg, Masashi Sugiyama
2021 arXiv   pre-print
We propose to extend the IR approach by treating the problem as an instance of positive-unlabeled (PU) learning -- i.e., learning binary classifiers from only positive and unlabeled data, where the positive  ...  We consider the situation in which a user has collected a small set of documents on a cohesive topic, and they want to retrieve additional documents on this topic from a large collection.  ...  To this end, we look to positive-unlabeled (PU) learning [6] : a binary classification setting where a classifier is trained based on only positive and unlabeled data.  ... 
arXiv:1910.13339v2 fatcat:5bxq3pjb6bgzteuhk6xf6vw77i

Partly Supervised Multitask Learning [article]

Abdullah-Al-Zubaer Imran, Chao Huang, Hui Tang, Wei Fan, Yuan Xiao, Dingjun Hao, Zhen Qian, Demetri Terzopoulos
2020 arXiv   pre-print
two important tasks in medical imaging, segmentation and diagnostic classification.  ...  Moreover, optimizing a model for multiple tasks can provide better generalizability than single-task learning.  ...  Chest Dataset: For our experiments, we make use of the Montgomery County chest X-ray set, the Shenzhen chest X-ray set available from the NIH (Jaeger et al., 2014), and the dataset available from the  ... 
arXiv:2005.02523v1 fatcat:jzj7irvoy5ahfovfqkymbpkilq

Pairwise Supervision Can Provably Elicit a Decision Boundary [article]

Han Bao, Takuya Shimada, Liyuan Xu, Issei Sato, Masashi Sugiyama
2022 arXiv   pre-print
In this paper, we reveal that a product-type formulation of similarity learning is strongly related to an objective of binary classification.  ...  Consequently, our results elucidate that similarity learning is capable of solving binary classification by directly eliciting a decision boundary.  ...  classification via the surrogate risk minimization.  ... 
arXiv:2006.06207v2 fatcat:rcbktgbhcfhkdnca77uuike4xu

Federated Learning from Only Unlabeled Data with Class-Conditional-Sharing Clients [article]

Nan Lu, Zhao Wang, Xiaoxiao Li, Gang Niu, Qi Dou, Masashi Sugiyama
2022 arXiv   pre-print
wanted model is recovered from the modified model.  ...  We propose federation of unsupervised learning (FedUL), where the unlabeled data are transformed into surrogate labeled data for each of the clients, a modified model is trained by supervised FL, and the  ...  The former learns a multiclass classifier from multiple U sets based on empirical proportion risk minimization (EPRM) (Yu et al., 2014), while the latter learns a binary classifier from multiple U sets  ... 
arXiv:2204.03304v2 fatcat:5tbybbcsivdljkutjyj5r6vmai

Learning with Multiplicative Perturbations [article]

Xiulong Yang, Shihao Ji
2020 arXiv   pre-print
We conduct a series of experiments that analyze the behavior of the multiplicative perturbations and demonstrate that xAT and xVAT match or outperform state-of-the-art classification accuracies across multiple established benchmarks while being about 30% faster than their additive counterparts.  ...  an input image, z ∈ {0, 1}^P is a set of binary masks, and ⊙ denotes element-wise multiplication.  ... 
arXiv:1912.01810v2 fatcat:h2ynujs2nbb73jaz6yulof2oau

Convex Formulation for Learning from Positive and Unlabeled Data

Marthinus Christoffel du Plessis, Gang Niu, Masashi Sugiyama
2015 International Conference on Machine Learning  
We discuss binary classification from only positive and unlabeled data (PU classification), which is conceivable in various real-world machine learning problems.  ...  In this paper, we discuss a convex formulation for PU classification that can still cancel the bias. The key idea is to use different loss functions for positive and unlabeled samples.  ...  On the other hand, PU classification only requires updating the unlabeled dataset, which is much less costly.  ... 
dblp:conf/icml/PlessisNS15 fatcat:sso2uey3w5clxewugoi5ntybgu
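The bias-cancelling decomposition behind PU classification in this entry can be sketched as follows. This is a minimal numpy sketch assuming a known class prior π and a logistic loss throughout; the paper's convex formulation additionally uses a different loss (the double hinge) for the positive samples, which is not shown here:

```python
import numpy as np

def logistic_loss(z):
    return np.log1p(np.exp(-z))  # l(z) for margin z = y * g(x)

def pu_risk(g_pos, g_unl, prior, loss=logistic_loss):
    """Unbiased PU risk estimate from positive and unlabeled scores only.

    Treat unlabeled data as if negative, then cancel the resulting bias
    by subtracting the positive samples' contribution to the
    negative-class risk:
        R(g) = pi * E_p[l(g)] + E_u[l(-g)] - pi * E_p[l(-g)]
    """
    r_pos = prior * np.mean(loss(g_pos))
    r_neg = np.mean(loss(-g_unl)) - prior * np.mean(loss(-g_pos))
    return r_pos + r_neg

# Toy scores from a hypothetical classifier g: positives score higher.
rng = np.random.default_rng(1)
risk = pu_risk(rng.normal(1.0, 1.0, 200), rng.normal(0.0, 1.0, 500), prior=0.4)
```

With the same loss on both terms this estimator is unbiased but need not be convex in g; using different losses for the positive and unlabeled parts, as the paper proposes, is what restores convexity.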

A Convex Optimization Framework for Active Learning

Ehsan Elhamifar, Guillermo Sapiro, Allen Yang, S. Shankar Sastry
2013 2013 IEEE International Conference on Computer Vision  
Active learning is the problem of progressively selecting and annotating the most informative unlabeled samples, in order to obtain a high classification performance.  ...  In many image/video/web classification problems, we have access to a large number of unlabeled samples. However, it is typically expensive and time consuming to obtain labels for the samples.  ...  Notice that one can run a single-mode active learning method multiple times without retraining the classifier in order to select multiple unlabeled samples.  ... 
doi:10.1109/iccv.2013.33 dblp:conf/iccv/ElhamifarSYS13 fatcat:fbba3s3s3fcnxckxsxrxlooh7m

Adversarial Attacks on Graph Neural Networks via Meta Learning [article]

Daniel Zügner, Stephan Günnemann
2019 arXiv   pre-print
We investigate training time attacks on graph neural networks for node classification that perturb the discrete graph structure.  ...  We split the datasets into labeled (10%) and unlabeled (90%) nodes.  ...  Given a single (attributed) graph and a set of labeled nodes, the goal is to infer the class labels of the unlabeled nodes.  ... 
arXiv:1902.08412v1 fatcat:5zs52j55jnf5vkvv7ho6jhzd24

Combining Low-Density Separators with CNNs

Yu-Xiong Wang, Martial Hebert
2016 Neural Information Processing Systems  
Our key insight is to expose multiple top layer units to a massive set of unlabeled images, as shown in Figure 1, which decouples these units from ties to the original specific set of categories.  ...  By encouraging these units to learn diverse sets of low-density separators across the unlabeled data, we capture a more generic, richer description of the visual world, which decouples these units from  ...  This is achieved by encouraging multiple top layer units to generate diverse sets of low-density separations across the unlabeled data in activation spaces, which decouples these units from ties to a specific  ... 
dblp:conf/nips/WangH16 fatcat:q4dq4gbs2jgovdmy2bqc5el2aa

Scalable Active Learning for Multiclass Image Classification

A. J. Joshi, F. Porikli, N. P. Papanikolopoulos
2012 IEEE Transactions on Pattern Analysis and Machine Intelligence  
Thorough empirical evaluation of classification accuracy, noise sensitivity, imbalanced data, and computational performance on a diverse set of image datasets demonstrates the strengths of the proposed  ...  First, we propose a new interaction modality for training which requires only yes-no type binary feedback instead of a precise category label.  ...  Indeed, most work on classification uses surrogates to estimate the misclassification risk in the absence of the test set.  ... 
doi:10.1109/tpami.2012.21 pmid:22997129 fatcat:mvu7p427fjckxo2ij4ii5r7jhy

Adversarial Attacks on Neural Networks for Graph Data

Daniel Zügner, Amir Akbarnejad, Stephan Günnemann
2018 Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining - KDD '18  
Deep learning models for graphs have achieved strong performance for the task of node classification.  ...  Our experimental study shows that the accuracy of node classification drops significantly even when performing only a few perturbations.  ...  We split the datasets into labeled (10%) and unlabeled (90%) nodes.  ... 
doi:10.1145/3219819.3220078 dblp:conf/kdd/ZugnerAG18 fatcat:ijuuqoe6cveiroejpb5knyxy6e

Embracing Imperfect Datasets: A Review of Deep Learning Solutions for Medical Image Segmentation [article]

Nima Tajbakhsh, Laura Jeyaseelan, Qian Li, Jeffrey Chiang, Zhihao Wu, Xiaowei Ding
2020 arXiv   pre-print
Recently, a large body of research has studied the problem of medical image segmentation with imperfect datasets, tackling two major dataset limitations: scarce annotations where only limited annotated  ...  Despite the new performance highs, the recent advanced segmentation models still require large, representative, and high quality annotated datasets.  ...  Mirikharaji and Hamarneh (2018) leverage a star shape prior via an extra loss term on top of a binary cross entropy loss.  ... 
arXiv:1908.10454v2 fatcat:mjvfbhx75bdkbheysq3r7wmhdi
Showing results 1 — 15 out of 1,423 results