
Constructing Multiclass Classifiers using Binary Classifiers Under Log-Loss [article]

Assaf Ben-Yishai, Or Ordentlich
2021 arXiv   pre-print
The construction of multiclass classifiers from binary elements is studied in this paper, and performance is quantified by the regret, defined with respect to the Bayes optimal log-loss.  ...  The first is one vs. all (OVA), for which we prove that the multiclass regret is upper bounded by the sum of binary regrets of the constituent classifiers.  ...  We have studied the problem of soft classification under log-loss and have focused on constructions of multiclass classifiers from binary classifiers.  ... 
arXiv:2102.08184v2 fatcat:qwrc7f5eubf4pksmiylz3f7vzi
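
A minimal, illustrative sketch of the one-vs-all (OVA) construction discussed in the entry above, not the paper's exact procedure: train one binary log-loss classifier per class and renormalize the resulting binary posteriors into a multiclass distribution. scikit-learn's LogisticRegression and the iris data are stand-ins for any binary log-loss learners and dataset.

```python
# One-vs-all (OVA) soft multiclass classification from binary log-loss learners.
# Illustrative sketch only: LogisticRegression stands in for any binary
# classifier trained under log-loss; renormalizing the K binary posteriors is
# one common way to form a multiclass distribution, not necessarily the paper's.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
classes = np.unique(y)

# Train one binary classifier per class: "class k" vs. "not class k".
binary_models = []
for k in classes:
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X, (y == k).astype(int))
    binary_models.append(clf)

# Combine: take each binary posterior P(y = k | x) and renormalize over k.
scores = np.column_stack([m.predict_proba(X)[:, 1] for m in binary_models])
multiclass_probs = scores / scores.sum(axis=1, keepdims=True)
print(multiclass_probs[:3].round(3))
```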

Error-Correcting Tournaments [chapter]

Alina Beygelzimer, John Langford, Pradeep Ravikumar
2009 Lecture Notes in Computer Science  
We present a family of pairwise tournaments reducing k-class classification to binary classification.  ...  These reductions are provably robust against a constant fraction of binary errors, and match the best possible computation and regret up to a constant.  ...  Utilizing this observation, we construct a reduction, called the Filter Tree, with the property that it uses O(log k) binary examples and O(log k) computation at training and test time with a multiclass  ... 
doi:10.1007/978-3-642-04414-4_22 fatcat:et3ish4wfvg7vphxwnu3k3sonm
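
A rough sketch of the test-time side of a tree-structured reduction like the Filter Tree mentioned above: a balanced binary tree over the k labels is traversed from the root, with a binary classifier choosing a subtree at each node, so a prediction costs O(log k) binary calls. The `node_clf` callable and the label split are illustrative; the actual Filter Tree training, which conditions each node on its children's winners, is not shown.

```python
# Tree-structured prediction: walk a balanced binary tree over the k labels,
# letting a binary classifier at each internal node choose a subtree, so a
# test-time prediction costs O(log k) binary calls.  Sketch only.
from typing import Callable, List

def tree_predict(x, labels: List[int],
                 node_clf: Callable[[object, List[int], List[int]], int]) -> int:
    """Descend from the root: at each node, node_clf(x, left, right) returns
    0 to continue into `left` or 1 to continue into `right`."""
    current = list(labels)
    while len(current) > 1:
        mid = len(current) // 2
        left, right = current[:mid], current[mid:]
        current = right if node_clf(x, left, right) else left
    return current[0]

# Toy usage: a "classifier" that always walks right.
print(tree_predict(None, list(range(8)), lambda x, l, r: 1))  # -> 7
```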

Error-Correcting Tournaments [article]

Alina Beygelzimer, John Langford, Pradeep Ravikumar
2010 arXiv   pre-print
We present a family of pairwise tournaments reducing k-class classification to binary classification. These reductions are provably robust against a constant fraction of binary errors.  ...  The results improve on the PECOC construction SECOC with an exponential improvement in computation, from O(k) to O(log₂ k), and the removal of a square root in the regret dependence, matching the best possible  ...  Utilizing this observation, we construct a reduction, called the filter tree, which uses a O(log k) computation per multiclass example at both training and test time, and whose multiclass regret is bounded  ... 
arXiv:0902.3176v4 fatcat:2jysxunlfne3dldzfukuxjreo4

Multiclass Learning by Probabilistic Embeddings

Ofer Dekel, Yoram Singer
2002 Neural Information Processing Systems  
Furthermore, the method of multiclass categorization using ECOC is shown to be an instance of Bunching.  ...  A key construction in the analysis of the algorithm is the notion of probabilistic output codes, a generalization of error correcting output codes (ECOC).  ...  then solving each binary problem individually to obtain a multiclass classifier.  ... 
dblp:conf/nips/DekelS02 fatcat:cvki6cpmevd2pfsu3x4fxqhwry
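
For reference, the standard (non-probabilistic) ECOC scheme that probabilistic output codes generalize works roughly as follows: assign each class a binary codeword, train one binary classifier per code bit, and decode a test point by picking the class whose codeword is closest in Hamming distance to the vector of binary predictions. The code matrix and predictions below are made-up toy values.

```python
# Standard error-correcting output codes (ECOC): one binary problem per column
# of a code matrix, Hamming-distance decoding over rows.  Illustrative sketch;
# the code matrix is arbitrary and the bit predictions are hand-picked.
import numpy as np

# Rows = classes, columns = binary problems (entries in {0, 1}).
code_matrix = np.array([[0, 0, 1, 1],
                        [0, 1, 0, 1],
                        [1, 0, 0, 1],
                        [1, 1, 1, 0]])

def ecoc_decode(bit_predictions: np.ndarray) -> int:
    """Return the class whose codeword has the smallest Hamming distance
    to the vector of binary predictions."""
    distances = (code_matrix != bit_predictions).sum(axis=1)
    return int(np.argmin(distances))

# Toy usage: predictions for the 4 binary problems, with one flipped bit.
print(ecoc_decode(np.array([1, 0, 0, 0])))  # -> class 2 despite the bit error
```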

Multiclass classification of microarray data samples with a reduced number of genes

Elizabeth Tapia, Leonardo Ornella, Pilar Bulacio, Laura Angelone
2011 BMC Bioinformatics  
Conclusions: A comprehensive experimental work shows that the bound is indeed useful to induce accurate and sparse multiclass classifiers for microarray data samples.  ...  The bound suggests that high-dimensional binary output domains might favor the existence of accurate and sparse binary mediated multiclass classifiers for microarray data samples.  ...  Multiclass classifiers for M ≥ 3 classes built from n binary classifiers, n ≥ ⌈log₂ M⌉ + 2, are considered.  ... 
doi:10.1186/1471-2105-12-59 pmid:21342522 pmcid:PMC3056725 fatcat:7i24ujejh5bt5ffks5i3mhlk7a
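
A quick worked check of the code-length condition quoted in the snippet above, n ≥ ⌈log₂ M⌉ + 2 binary classifiers for M ≥ 3 classes; the particular values of M below are arbitrary examples.

```python
# Minimum number of binary classifiers n >= ceil(log2(M)) + 2 quoted in the
# entry above, evaluated for a few class counts M (worked examples only).
from math import ceil, log2

for M in (3, 4, 10, 100):
    print(M, ceil(log2(M)) + 2)
# M=3 -> 4, M=4 -> 4, M=10 -> 6, M=100 -> 9
```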

Better multiclass classification via a margin-optimized single binary problem

Ran El-Yaniv, Dmitry Pechyony, Elad Yom-Tov
2008 Pattern Recognition Letters  
We develop a new multiclass classification method that reduces the multiclass problem to a single binary classifier (SBC).  ...  We provide a bound on the generalization error of the multiclass classifier obtained with our construction and outline the conditions for its consistency.  ...  Let M(h_{α,µ}, (x, y)) be the multiclass 0/1 loss of the SBC classifier, using the hypothesis h_{α,µ} for its binary decisions, over a multiclass example (x, y).  ... 
doi:10.1016/j.patrec.2008.06.012 fatcat:3ohttlc63zhy7lj4zkonb5q4r4

Efficient Loss-Based Decoding on Graphs For Extreme Classification [article]

Itay Evron, Edward Moroshko, Koby Crammer
2018 arXiv   pre-print
We show how to find the sweet spot of this tradeoff using only the training data.  ...  Our framework employs output codes induced by graphs, for which we show how to perform efficient loss-based decoding to potentially improve accuracy.  ...  A few well-known loss functions are the hinge loss L(z) = max(0, 1 − z) used by SVM, its square, the log loss L(z) = log(1 + e^{−z}) used in logistic regression, and the exponential loss L(z) = e^{−z} used in  ... 
arXiv:1803.03319v2 fatcat:7j5auzag3fcnboqgkco4o54u5u
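
A small sketch of loss-based decoding in the style this line of work builds on (the Allwein et al. framework): given a ±1 code matrix and the real-valued outputs f_j(x) of the binary classifiers, predict the class k minimizing Σ_j L(M[k,j]·f_j(x)). The hinge, log, and exponential losses match the ones quoted in the snippet; the code matrix and margins are invented for illustration, and this is not the graph-induced decoding of the paper itself.

```python
# Loss-based decoding over a +/-1 code matrix: predict the class k minimizing
# sum_j L(M[k, j] * f_j(x)) for a surrogate loss L.  Illustrative sketch;
# the matrix and margins below are made up.
import numpy as np

hinge    = lambda z: np.maximum(0.0, 1.0 - z)
log_loss = lambda z: np.log1p(np.exp(-z))
exp_loss = lambda z: np.exp(-z)

# Rows = classes, columns = binary problems, entries in {-1, +1}.
M = np.array([[+1, +1, -1],
              [+1, -1, +1],
              [-1, +1, +1]])

def loss_based_decode(margins: np.ndarray, loss) -> int:
    """margins[j] is the real-valued output of the j-th binary classifier."""
    total = loss(M * margins).sum(axis=1)   # broadcast over rows (classes)
    return int(np.argmin(total))

margins = np.array([0.9, -1.2, 0.4])        # hypothetical binary outputs
for loss in (hinge, log_loss, exp_loss):
    print(loss_based_decode(margins, loss))  # -> 1 for all three losses
```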

Consistent Classification with Generalized Metrics [article]

Xiaoyan Wang, Ran Li, Bowei Yan, Oluwasanmi Koyejo
2019 arXiv   pre-print
We propose a framework for constructing and analyzing multiclass and multioutput classification metrics, i.e., involving multiple, possibly correlated multiclass labels.  ...  Further, we analyze averaging methodologies commonly used to compute multioutput metrics and characterize the corresponding Bayes optimal classifiers.  ...  Tewari and Bartlett [26] showed that multiclass classifiers constructed using consistent binary classifiers may still lead to inconsistent multiclass results. Narasimhan et al.  ... 
arXiv:1908.09057v1 fatcat:i473x56wvbbjjnivdejqmwwive

Consistent Multiclass Algorithms for Complex Performance Measures

Harikrishna Narasimhan, Harish G. Ramaswamy, Aadirupa Saha, Shivani Agarwal
2015 International Conference on Machine Learning  
This setting includes as a special case all loss-based performance measures, which are simply linear functions of the confusion matrix, but also includes more complex performance measures such as the multiclass  ...  The resulting algorithms are provably consistent and outperform a multiclass version of the state-of-the-art SVMperf method in experiments; for large multiclass problems, the algorithms are also orders  ...  SA acknowledges support from the Department of Science & Technology (DST) of the Indian Government under a Ramanujan Fellowship, from the Indo-US Science & Technology Forum (IUSSTF), and from Yahoo in  ... 
dblp:conf/icml/NarasimhanRS015 fatcat:4bvqvdcy2zdijnekeyed7grble
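
To make the "linear functions of the confusion matrix" remark in the snippet concrete, a tiny worked example: a loss-based performance measure is Σ_{i,j} L[i,j]·C[i,j], where C holds empirical joint frequencies of (true class, predicted class) and L is a cost matrix. The labels and 0/1 cost matrix below are illustrative, so the measure reduces to the error rate.

```python
# A loss-based performance measure as a linear function of the confusion
# matrix: sum_{i,j} L[i,j] * C[i,j], where C[i,j] is the empirical fraction of
# examples with true class i predicted as class j.  Toy data; 0/1 costs make
# the measure equal to the misclassification rate.
import numpy as np

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])
n_classes = 3

C = np.zeros((n_classes, n_classes))
for t, p in zip(y_true, y_pred):
    C[t, p] += 1.0 / len(y_true)            # empirical joint frequencies

L = 1.0 - np.eye(n_classes)                 # 0/1 misclassification costs
print((L * C).sum())                        # -> 0.333... (2 of 6 errors)
```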

Convex Optimization for Binary Classifier Aggregation in Multiclass Problems [article]

Sunho Park, TaeHyun Hwang, Seungjin Choi
2014 arXiv   pre-print
Multiclass problems are often decomposed into multiple binary problems that are solved by individual binary classifiers whose results are integrated into a final answer.  ...  In this paper we present a convex optimization method for an optimal aggregation of binary classifiers to estimate class membership probabilities in multiclass problems.  ...  Direct approach involves constructing a discriminant function directly for the multiclass problem.  ... 
arXiv:1401.4143v1 fatcat:x6cbx2f3obfv7ilp6i527a37sm

Convex Optimization for Binary Classifier Aggregation in Multiclass Problems [chapter]

Sunho Park, Tae Hyun Hwang, Seungjin Choi
2014 Proceedings of the 2014 SIAM International Conference on Data Mining  
Multiclass problems are often decomposed into multiple binary problems that are solved by individual binary classifiers whose results are integrated into a final answer.  ...  In this paper we present a convex optimization method for an optimal aggregation of binary classifiers to estimate class membership probabilities in multiclass problems.  ...  Direct approach involves constructing a discriminant function directly for the multiclass problem.  ... 
doi:10.1137/1.9781611973440.32 dblp:conf/sdm/ParkHC14 fatcat:ejuczff4rzg7tjmkjvttm6dwj4

A cryptographic approach to black box adversarial machine learning [article]

Kevin Shi, Daniel Hsu, Allison Bishop
2020 arXiv   pre-print
Our construction crucially leverages hidden randomness in the multiclass-to-binary reduction.  ...  Our proof constructs a new security problem for random binary classifiers which is easier to empirically verify and a reduction from the security of this new model to the security of the ensemble classifier  ...  All random binary classifiers used in these experiments are the same architecture as the random binary classifiers in Section 4.a.  ... 
arXiv:1906.03231v2 fatcat:a5lwo22jhfbajhmivnbkhlzu54

On the Consistency of Output Code Based Learning Algorithms for Multiclass Learning Problems

Harish G. Ramaswamy, Balaji Srinivasan Babu, Shivani Agarwal, Robert C. Williamson
2014 Annual Conference Computational Learning Theory  
We focus on settings where the binary problems are solved by minimizing a binary surrogate loss, and derive general conditions on the binary surrogate loss under which the one-vs-all and all-pairs code  ...  We then consider general multiclass learning problems defined by a general multiclass loss, and derive conditions on the output code matrix and binary surrogates under which the resulting algorithm is  ...  SA acknowledges support from the Department of Science & Technology (DST) of the Indian Government under a Ramanujan Fellowship, from the Indo-US Science & Technology Forum (IUSSTF), and from Yahoo in  ... 
dblp:conf/colt/RamaswamyBAW14 fatcat:jizxzymiizbghepw2to27orqte

Error limiting reductions between classification tasks

Alina Beygelzimer, Varsha Dani, Tom Hayes, John Langford, Bianca Zadrozny
2005 Proceedings of the 22nd international conference on Machine learning - ICML '05  
We use this model to devise a new reduction from multi-class cost-sensitive classification to binary classification with the following guarantee: If the learned binary classifier has error rate at most  ...  Since cost-sensitive classification can embed any bounded loss finite choice supervised learning task, this result shows that any such task can be solved using a binary classification oracle.  ...  (Tree error efficiency) For all multiclass problems (D, X, T), if the binary classifiers have loss rate ε, the tree reduction has loss rate at most ε log₂ r. Proof.  ... 
doi:10.1145/1102351.1102358 dblp:conf/icml/BeygelzimerDHLZ05 fatcat:pb2i5k7v4jaxrji4jj3wsgpkce
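
A quick numeric reading of the tree-reduction guarantee quoted above, assuming the bound is ε·log₂ r for binary loss rate ε and r classes; the particular values below are arbitrary worked examples.

```python
# Error-limiting guarantee for the tree reduction quoted above: binary loss
# rate eps gives multiclass loss rate at most eps * log2(r) for r classes.
# Worked examples only.
from math import log2

for eps, r in ((0.01, 8), (0.05, 16)):
    print(eps, r, eps * log2(r))
# eps=0.01, r=8  -> bound 0.03
# eps=0.05, r=16 -> bound 0.20
```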

Convex Calibrated Surrogates for the Multi-Label F-Measure [article]

Mingyuan Zhang, Harish G. Ramaswamy, Shivani Agarwal
2020 arXiv   pre-print
In this paper, we explore the question of designing convex surrogate losses that are calibrated for the F-measure – specifically, that have the property that minimizing the surrogate loss yields (in the limit of sufficient data) a Bayes optimal multi-label classifier for the F-measure.  ...  SA is also supported in part by the US National Institutes of Health (NIH) under Grant No. U01CA214411.  ... 
arXiv:2009.07801v1 fatcat:jfk5shpu75aardsxrg6lxctaiq
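
For reference, the example-based multi-label F-measure that the entry targets can be written as F(y, ŷ) = 2|y ∧ ŷ| / (|y| + |ŷ|) for binary label vectors. The sketch below only computes this quantity, with an illustrative convention for the all-zero case; it does not reflect the paper's surrogate construction.

```python
# Example-based multi-label F-measure for binary label vectors:
# F(y, yhat) = 2 * |y & yhat| / (|y| + |yhat|).  Sketch only; returning 1 when
# both vectors are all-zero is one common convention, not the paper's.
import numpy as np

def f_measure(y: np.ndarray, y_hat: np.ndarray) -> float:
    intersection = np.logical_and(y, y_hat).sum()
    denom = y.sum() + y_hat.sum()
    return 1.0 if denom == 0 else 2.0 * intersection / denom

print(f_measure(np.array([1, 0, 1, 1]), np.array([1, 1, 1, 0])))  # -> 0.666...
```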