34 hits

Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks [article]

Maksym Andriushchenko, Matthias Hein
2019 arXiv   pre-print
Moreover, the robust test error rates we achieve are competitive with those of provably robust convolutional networks.  ...  The problem of adversarial robustness has been studied extensively for neural networks.  ...  Acknowledgements We thank the anonymous reviewers for very helpful and thoughtful comments.  ... 
arXiv:1906.03526v2 fatcat:icppxp5j3ja4vcsoqnlxfm2st4

A theory of multiclass boosting [article]

Indraneel Mukherjee, Robert E. Schapire
2011 arXiv   pre-print
In this paper, we create a broad and general framework, within which we make precise and identify the optimal requirements on the weak-classifier, as well as design the most effective, in a certain sense  ...  Although the case of binary classification is well understood, in the multiclass setting, the "correct" requirements on the weak classifier, or the notion of the most efficient boosting algorithms are  ...  Acknowledgments This research was funded by the National Science Foundation under grants IIS-0325500 and IIS-1016029.  ... 
arXiv:1108.2989v1 fatcat:hicnwicpsre3vathstinf6acrm

A Framework for Enhancing Deep Neural Networks Against Adversarial Malware [article]

Deqiang Li, Qianmu Li, Yanfang Ye, Shouhuai Xu
2021 arXiv   pre-print
Under the guidance of these six principles, we propose a defense framework to enhance the robustness of deep neural networks against adversarial malware evasion attacks.  ...  As a response to the adversarial malware classification challenge organized by the MIT Lincoln Lab and associated with the AAAI-19 Workshop on Artificial Intelligence for Cyber Security (AICS'2019), we  ...  Turning Principles into A Framework The principles discussed above guide us to propose a framework for adversarial malware classification and detection, which is highlighted in Figure 1 and elaborated  ... 
arXiv:2004.07919v2 fatcat:ug4ix4sbdjdkjfniqaabdchaga

Towards Evaluating the Robustness of Deep Diagnostic Models by Adversarial Attack

Mengting Xu, Tao Zhang, Zhongnian Li, Mingxia Liu, Daoqiang Zhang
2021 Medical Image Analysis  
labels for both original/clean images and those adversarial ones.  ...  The so-called "adversarial example" is a well-designed perturbation that is not easily perceived by humans but results in a false output of deep diagnostic models with high confidence.  ...  In the future, we could investigate the effectiveness of differentiation of correctly classified/misclassified training examples in the recently proposed certified/provable robustness framework and explore  ... 
doi:10.1016/ pmid:33550005 fatcat:dyyp4d24hvduto4gknjufups7e

Adaptive Diffusions for Scalable Learning over Graphs [article]

Dimitris Berberidis, Athanasios N. Nikolakopoulos, Georgios B. Giannakis
2018 arXiv   pre-print
Furthermore, a robust version of the classifier facilitates learning even in noisy environments.  ...  different for each class.  ...  We test our methods in terms of multiclass and multilabel classification accuracy, and confirm that they achieve results competitive with state-of-the-art methods, while also being considerably faster.  ... 
arXiv:1804.02081v2 fatcat:gqy6jnnzwnge3n6srko26fgqoq

Identifying and Exploiting Structures for Reliable Deep Learning [article]

Amartya Sanyal
2021 arXiv   pre-print
Deep learning research has recently witnessed impressively fast-paced progress in a wide range of tasks including computer vision, natural language processing, and reinforcement learning.  ...  However, as recent works point out, these systems suffer from several issues that make them unreliable for use in the real world, including vulnerability to adversarial attacks (Szegedy et al. [248]),  ...  earlier regarding the need for more complex boundaries to achieve adversarial robustness [64, 238] .  ... 
arXiv:2108.07083v1 fatcat:lducrn5tlfeqvpxevz6gukfvse

A Review of Single-Source Deep Unsupervised Visual Domain Adaptation [article]

Sicheng Zhao, Xiangyu Yue, Shanghang Zhang, Bo Li, Han Zhao, Bichen Wu, Ravi Krishna, Joseph E. Gonzalez, Alberto L. Sangiovanni-Vincentelli, Sanjit A. Seshia, Kurt Keutzer
2020 arXiv   pre-print
We then summarize and compare different categories of single-source unsupervised domain adaptation methods, including discrepancy-based methods, adversarial discriminative methods, adversarial generative  ...  In this paper, we review the latest single-source deep unsupervised domain adaptation methods focused on visual tasks and discuss new perspectives for future research.  ...  However, all current DA work focuses only on boosting performance on the target domain, without any consideration of the robustness of the adapted model.  ... 
arXiv:2009.00155v3 fatcat:yqkew4n4q5gtbjosozufw37ome

Differentially Private Synthetic Data: Applied Evaluations and Enhancements [article]

Lucas Rosenblatt, Xiaoyan Liu, Samira Pouyanfar, Eduardo de Leon, Anuj Desai, Joshua Allen
2020 arXiv   pre-print
In this paper, we survey four differentially private generative adversarial networks for data synthesis.  ...  Our results suggest some synthesizers are more applicable for different privacy budgets, and we further demonstrate complicating domain-based tradeoffs in selecting an approach.  ...  ACKNOWLEDGEMENTS The authors would like to thank Soundar Srinivasan and Vijay Ramani of the Microsoft AI Development and Acceleration program for feedback throughout the project.  ... 
arXiv:2011.05537v1 fatcat:ozosb6sjevhlnd673tk5vwmnuq

Table of contents

2021 ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)  
Marques, King Juan Carlos University, Spain SPTM-13: MODELS, METHODS AND ALGORITHMS 1 SPTM-13.1: FAST AND PROVABLE ROBUST PCA VIA NORMALIZED COHERENCE PURSUIT (p. 5454)  ...  NETWORK Zhengyang Wang, Sheng Chen, Wei Yang, Yang Xu, University of Science and Technology of China, China MLSP-21.4: A ROBUST TO NOISE ADVERSARIAL RECURRENT MODEL FOR  ... 
doi:10.1109/icassp39728.2021.9414617 fatcat:m5ugnnuk7nacbd6jr6gv2lsfby

Trusted Artificial Intelligence: Towards Certification of Machine Learning Applications [article]

Philip Matthias Winter, Sebastian Eder, Johannes Weissenböck, Christoph Schwald, Thomas Doms, Tom Vogt, Sepp Hochreiter, Bernhard Nessler
2021 arXiv   pre-print
Therefore, the TÜV AUSTRIA Group in cooperation with the Institute for Machine Learning at the Johannes Kepler University Linz, proposes a certification process and an audit catalog for Machine Learning  ...  However, reliance on such technical systems is crucial for their widespread applicability and acceptance.  ...  Robustness against attacks: Adversarial attacks [65] pose a threat to the safety of ML applications.  ... 
arXiv:2103.16910v1 fatcat:xd37dtaxr5brjmljzvp3sr6lqa

Statistical and Algorithmic Insights for Semi-supervised Learning with Self-training [article]

Samet Oymak, Talha Cihad Gulcu
2020 arXiv   pre-print
We then demonstrate that regularization and class margin (i.e. separation) are provably important for success, and that a lack of regularization may prevent self-training from identifying the core features  ...  We then establish a connection between self-training based semi-supervision and the more general problem of learning with heterogeneous data and weak supervision.  ...  The papers [11, 29, 39, 46] show theoretically and empirically how semi-supervised learning procedures can achieve high robust accuracy and improve adversarial robustness.  ... 
arXiv:2006.11006v1 fatcat:rprnkp7irbae5bmky6yk4vn5ze

Pairing Conceptual Modeling with Machine Learning [article]

Wolfgang Maass, Veda C. Storey
2021 arXiv   pre-print
We then examine how conceptual modeling can be applied to machine learning and propose a framework for incorporating conceptual modeling into data science projects.  ...  The framework is illustrated by applying it to a healthcare application.  ...  The authors wish to thank Peter Chen, Carson Woo, and Oscar Pastor for their support of this paper, Iaroslav Shcherbatyi for sharing his technical expertise in machine learning, and Michael Schrefl for  ... 
arXiv:2106.14251v1 fatcat:n4kujuzttja67jqjs3vz3bdiba

Robust image classification: analysis and applications

Alhussein Fawzi
Upper bound on the adversarial robustness We now introduce our theoretical framework for analyzing the robustness to adversarial perturbations.  ...  In more detail, we first propose a formal definition of the average robustness to nuisance, and provide a provably efficient Monte-Carlo estimate.  ...  The goal is now to extend the previous result, derived for binary classifiers, to the multiclass classification case. To do so, we show the following lemma. Lemma 9 (Binary case to multiclass).  ... 
doi:10.5075/epfl-thesis-7258 fatcat:7f66modzezgwbegcnmsbt4evaa

Self-training Avoids Using Spurious Features Under Domain Shift [article]

Yining Chen, Colin Wei, Ananya Kumar, Tengyu Ma
2020 arXiv   pre-print
We verify our theory for spurious domain shift tasks on semi-synthetic Celeb-A and MNIST datasets.  ...  For this setting, we prove that entropy minimization on unlabeled target data will avoid using the spurious feature if initialized with a decently accurate source classifier, even though the objective  ...  [9] show that self-training on unlabeled data can improve adversarially robust generalization for linear models in a Gaussian setting.  ... 
arXiv:2006.10032v3 fatcat:hxisjz7wpjbqlmebd7wryezqgi

Robust Learning under Distributional Shifts [article]

Yogesh Balaji
We develop a likelihood estimation framework based on deep generative models for this task.  ...  Designing robust models is critical for reliable deployment of artificial intelligence systems.  ...  Acknowledgments First and foremost, I would like to thank my advisors Rama Chellappa and Soheil Feizi. This dissertation wouldn't be possible without your valuable guidance and support.  ... 
doi:10.13016/p0ih-yx4j fatcat:54q3zps7rnaexeduqqd4sogvnm