20,112 Hits in 6.2 sec

Robust binary classification with the 01 loss [article]

Yunzhe Xue, Meiyan Xie, Usman Roshan
2020 arXiv   pre-print
The 01 loss is robust to outliers and tolerant to noisy data compared to convex loss functions. We conjecture that the 01 loss may also be more robust to adversarial attacks.  ...  On the CIFAR10 binary classification task between classes 0 and 1 with adversarial perturbation of 0.0625 we see that the MLP01 network loses 27% in accuracy whereas the MLP-logistic counterpart loses 83%  ...  For the task of binary classification on standard image recognition benchmarks we show that our linear 01 loss solver and the MLP01 loss are both as accurate as their convex counterparts, namely the linear  ...
arXiv:2002.03444v1 fatcat:ylvdwpi2kvc5phf4nutnwqvk2y

Defending against substitute model black box adversarial attacks with the 01 loss [article]

Yunzhe Xue, Meiyan Xie, Usman Roshan
2020 arXiv   pre-print
We compare the accuracy of adversarial examples from substitute model black box attacks targeting our 01 loss models and their convex counterparts for binary classification on popular image benchmarks.  ...  The 01 loss model is known to be more robust to outliers and noise than convex models that are typically used in practice.  ...  METHODS 2.1 A dual layer 01 loss neural network The problem of determining the hyperplane with minimum number of misclassifications in a binary classification problem is known to be NP-hard [2].  ...
arXiv:2009.09803v1 fatcat:ez3mtguokbhhvcllhzf4w63udy
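
The dual layer sign-activation 01 loss network described in the snippet above can be sketched in a few lines. The code below follows the general idea only (one hidden layer of sign units feeding a sign output, with the 01 loss counted as misclassifications); all weights, sizes, and names are illustrative, not taken from the paper:

```python
import random

def sign(z):
    # Sign activation mapping to {-1, +1}; ties at 0 broken toward +1 (a convention, not from the paper).
    return 1 if z >= 0 else -1

def forward(x, W1, b1, w2, b2):
    # One hidden layer of sign units, then a single sign output unit.
    h = [sign(sum(wi * xi for wi, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    return sign(sum(wi * hi for wi, hi in zip(w2, h)) + b2)

def loss01(data, W1, b1, w2, b2):
    # 01 loss: the count of misclassified examples (non-convex; minimizing it is NP-hard even for a single hyperplane).
    return sum(1 for x, y in data if forward(x, W1, b1, w2, b2) != y)

random.seed(0)
data = [([random.gauss(0, 1) for _ in range(4)], random.choice([-1, 1]))
        for _ in range(8)]
W1 = [[random.gauss(0, 1) for _ in range(4)] for _ in range(3)]
b1 = [0.0] * 3
w2 = [random.gauss(0, 1) for _ in range(3)]
b2 = 0.0
print(loss01(data, W1, b1, w2, b2))  # an integer between 0 and 8
```

Because the loss is a discrete count it provides no useful gradient, which is why this line of work trains with stochastic coordinate descent rather than backpropagation.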

Towards adversarial robustness with 01 loss neural networks [article]

Yunzhe Xue, Meiyan Xie, Usman Roshan
2020 arXiv   pre-print
Motivated by the general robustness properties of the 01 loss we propose a single hidden layer 01 loss neural network trained with stochastic coordinate descent as a defense against adversarial attacks on the CIFAR10 benchmark binary classification between classes 0 and 1.  ...
arXiv:2008.09148v1 fatcat:lkvy5tztazco7djd7r2rnwcdgq

Adversarial Risk Bounds via Function Transformation [article]

Justin Khim, Po-Ling Loh
2019 arXiv   pre-print
Specifically, we introduce a new class of function transformations with the property that the risk of the transformed functions upper-bounds the adversarial risk of the original functions.  ...  We also discuss extensions of our theory to multiclass classification and regression.  ...  The indicator loss (also known as the 01-loss) is defined by ℓ₀₁(f(x), y) = 1{sgn(f(x)) ≠ y}, and is of primary interest in classification.  ...
arXiv:1810.09519v2 fatcat:64ppkcuppnbxhdkaqbgduhuogm

Robust Forecasting [article]

Timothy Christensen, Hyungsik Roger Moon, Frank Schorfheide
2020 arXiv   pre-print
Finally, we derive "efficient robust" forecasts to deal with the problem of first having to estimate the set of forecast distributions and develop a suitable asymptotic efficiency theory.  ...  Forecasts obtained by replacing nuisance parameters that characterize the set of forecast distributions with efficient first-stage estimators can be strictly dominated by our efficient robust forecasts  ...  Binary (or Classification) Loss.  ... 
arXiv:2011.03153v4 fatcat:lmeqa27uongj5iogsc3ctrhcuy

On the transferability of adversarial examples between convex and 01 loss models [article]

Yunzhe Xue, Meiyan Xie, Usman Roshan
2020 arXiv   pre-print
As a result of this non-transferability we show that our dual layer sign activation network with 01 loss can attain robustness on par with simple convolutional networks.  ...  The 01 loss gives different and more accurate boundaries than convex loss models in the presence of outliers.  ...  Background The problem of determining the hyperplane with minimum number of misclassifications in a binary classification problem is known to be NP-hard [15].  ...
arXiv:2006.07800v2 fatcat:gjqy6juvijhn7fqqyp5g7xathq

An Interpretable Computer-Aided Diagnosis Method for Periodontitis From Panoramic Radiographs

Haoyang Li, Juexiao Zhou, Yi Zhou, Qiang Chen, Yangyang She, Feng Gao, Ying Xu, Jieyu Chen, Xin Gao
2021 Frontiers in Physiology  
In our method, alveolar bone loss (ABL), the clinical hallmark for periodontitis diagnosis, could be interpreted as the key feature.  ...  and health care settings with limited dental professionals.  ...  ACKNOWLEDGMENTS We thank He Zhang, Yi Zhang, Yongwei Tan, Nana Liu, and Ying Zhao at The Affiliated Stomatological Hospital of Soochow University for providing the data.  ... 
doi:10.3389/fphys.2021.655556 fatcat:cyt72i44d5drjjxrxra6x7k6oy

Locally Weighted Score Estimation for Quantile Classification in Binary Regression Models

John D. Rice, Jeremy M. G. Taylor
2016 Statistics in Biosciences  
Keywords: binary classification; local likelihood; logistic regression; asymmetric loss; robust estimation  ...  types of errors possible in this setting: missing a metastatic cancer by not performing the biopsy  ...  This work has much in common with robust estimation, but differs from previous approaches in this area in its focus on prediction, specifically classification into high- and low-risk groups.  ...  The authors also thank Michael Sabel for providing the melanoma sentinel lymph node biopsy data.  ...
doi:10.1007/s12561-016-9147-y pmid:28018492 pmcid:PMC5173294 fatcat:3xer7v4dsbdapogb7dhzjihqum

Construction of non-convex polynomial loss functions for training a binary classifier with quantum annealing [article]

Ryan Babbush, Vasil Denchev, Nan Ding, Sergei Isakov, Hartmut Neven
2014 arXiv   pre-print
We show that these loss functions are robust to label noise and provide a clear advantage over convex methods.  ...  Because experimental considerations constrain our objective function to take the form of a low degree PUBO (polynomial unconstrained binary optimization), we employ non-convex loss functions which are  ...  Perhaps the simplest loss function is the 0-1 loss function which provides a correct classification with penalty 0 and an incorrect classification with penalty 1, L₀₁(γᵢ) ≡ (1 − sign(γᵢ))/2. (1)  ...
arXiv:1406.4203v1 fatcat:nmri7mwwp5asrnymdmdpurdd24
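
Eq. (1) in the snippet above maps a margin γᵢ = yᵢ f(xᵢ) to a flat 0/1 penalty. A minimal sketch, with a hinge loss added for contrast (the hinge comparison is my addition, not part of the paper, and the sign convention at γ = 0 is a choice), shows why the 0-1 loss is insensitive to badly mislabeled outliers:

```python
def loss_01(gamma):
    # Eq. (1): penalty 0 for a correct classification (gamma > 0), 1 otherwise.
    # The boundary case gamma == 0 is treated as incorrect here by convention.
    return (1 - (1 if gamma > 0 else -1)) / 2

def hinge(gamma):
    # Convex surrogate, for contrast: the penalty grows without bound
    # as the margin violation gets worse.
    return max(0.0, 1 - gamma)

# A badly mislabeled outlier (large negative margin) costs the 0-1 loss
# exactly as much as any other mistake, but dominates the hinge loss.
for gamma in (2.0, -0.5, -100.0):
    print(gamma, loss_01(gamma), hinge(gamma))
```

This bounded penalty is the source of the label-noise robustness the abstract claims, and also of the non-convexity that motivates quantum annealing as a solver.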

A General Retraining Framework for Scalable Adversarial Classification [article]

Bo Li, Yevgeniy Vorobeychik, Xinyun Chen
2016 arXiv   pre-print
Traditional classification algorithms assume that training and test data come from similar distributions.  ...  We propose the first systematic and general-purpose retraining framework which can: a) boost robustness of an arbitrary learning algorithm, in the face of b) a broader class of adversarial models than  ...  Let L*_{A,01}(O) correspond to the total adversarial risk in Equation 2, where the loss function l(g_β(x), y) is the 0/1 loss. Suppose that O_L uses coordinate greedy with L random restarts.  ...
arXiv:1604.02606v2 fatcat:ebeg5nnokjb7pcr6yuvplzt2ee
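
The snippet above refers to an oracle O_L that runs coordinate greedy optimization with L random restarts. A generic sketch of that pattern, applied to a toy objective rather than the paper's adversarial risk (the function names, step size, and objective are all illustrative), might look like:

```python
import random

def coordinate_greedy(f, x0, step=0.5, iters=100):
    # Repeatedly pick one coordinate at random and nudge it by +/- step,
    # keeping any move that strictly lowers the objective f.
    x = list(x0)
    for _ in range(iters):
        i = random.randrange(len(x))
        for delta in (step, -step):
            cand = list(x)
            cand[i] += delta
            if f(cand) < f(x):
                x = cand
                break
    return x

def with_restarts(f, dim, L):
    # L random restarts: greedy descent finds a local optimum from each
    # starting point; keep the best local optimum seen.
    best = None
    for _ in range(L):
        x0 = [random.uniform(-5, 5) for _ in range(dim)]
        cand = coordinate_greedy(f, x0)
        if best is None or f(cand) < f(best):
            best = cand
    return best

random.seed(1)
f = lambda x: sum(xi * xi for xi in x)  # toy objective with optimum at the origin
x = with_restarts(f, dim=2, L=3)
print(round(f(x), 3))
```

Restarts matter because greedy coordinate moves can stall in local optima of a non-convex objective; taking the best of L runs trades compute for attack strength.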

Boosting in the presence of label noise [article]

Jakramate Bootkrajang, Ata Kaban
2013 arXiv   pre-print
However, pairing it with the new robust Boosting algorithm we propose here results in a more resilient algorithm under mislabelling.  ...  One is to employ a label-noise robust classifier as a base learner, while the other is to modify the AdaBoost algorithm to be more robust.  ...  JB thanks the Royal Thai Government for financial support.  ... 
arXiv:1309.6818v1 fatcat:itq5a5uscrgzdggbu7yibohswq

Adaptively Weighted Large Margin Classifiers

Yichao Wu, Yufeng Liu
2013 Journal of Computational And Graphical Statistics  
In this paper, we propose a new weighted large margin classification technique. The weights are chosen adaptively with data.  ...  The proposed classifiers are shown to be robust to outliers and thus are able to produce more accurate classification results.  ...  Levine, the associate editor, and two referees for their constructive comments and suggestions that led to significant improvement of the article.  ... 
doi:10.1080/10618600.2012.680866 pmid:24363545 pmcid:PMC3867158 fatcat:5fs4dirhynh5jb4nzouqarrcsa

Resilient linear classification

Sangdon Park, James Weimer, Insup Lee
2017 Proceedings of the 8th International Conference on Cyber-Physical Systems - ICCPS '17  
Specifically, a generic metric is proposed that is tailored to measure resilience of classification algorithms with respect to worst-case tampering of the training data.  ...  To overcome these limitations, we propose a linear classification algorithm with a majority constraint and prove that it is strictly more resilient than the traditional algorithms.  ...  SETUP FOR RESILIENT BINARY CLASSIFICATION This section introduces essential definitions that are the bases for describing the resilient binary classification problem.  ...
doi:10.1145/3055004.3055006 dblp:conf/iccps/ParkWL17 fatcat:xnfm7uytbfayzd5gllft4bcgfa

Learning with Symmetric Label Noise: The Importance of Being Unhinged [article]

Brendan van Rooyen and Aditya Krishna Menon and Robert C. Williamson
2015 arXiv   pre-print
Convex potential minimisation is the de facto approach to binary classification.  ...  Experiments confirm the SLN-robustness of the unhinged loss.  ...  Binary classification is concerned with the risk corresponding to the zero-one loss, ℓ₀₁: (y, v) ↦ 1[yv < 0] + ½·1[v = 0].  ...
arXiv:1505.07634v1 fatcat:blmmqgbp3jgffj672qzmvoio7q
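
The zero-one loss quoted above assigns full penalty for a sign disagreement between label y and prediction v, and half penalty for a tie at the decision boundary (v = 0); the unhinged loss of this paper is the linear function 1 − yv. Both can be written directly (variable names are mine):

```python
def zero_one(y, v):
    # Zero-one loss from the snippet: full penalty for a sign error,
    # half penalty for landing exactly on the boundary v == 0.
    return 1.0 * (y * v < 0) + 0.5 * (v == 0)

def unhinged(y, v):
    # The unhinged loss: linear in the margin y*v and unbounded below,
    # which is what makes it robust to symmetric label noise (SLN).
    return 1 - y * v

print(zero_one(1, -0.3), zero_one(1, 0.0), zero_one(1, 0.7))  # 1.0 0.5 0.0
print(unhinged(1, 0.7), unhinged(-1, 0.7))  # small penalty if y=+1, large if y=-1
```

Under symmetric label flips the expected unhinged risk shifts by a constant that does not depend on the classifier, so the minimizer is unchanged; bounded convex potentials lack this property.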

Improved Gradient based Adversarial Attacks for Quantized Networks [article]

Kartik Gupta, Thalaiyasingam Ajanthan
2021 arXiv   pre-print
Despite being a simple modification to existing gradient based adversarial attacks, experiments on multiple image classification datasets with multiple network architectures demonstrate that our temperature  ...  show a fake sense of robustness.  ...  Adversarial Attacks and Robustness of Binary Networks.  ... 
arXiv:2003.13511v2 fatcat:7m5afvgqujbfnhiifzvx6vb57q
Showing results 1 — 15 out of 20,112 results