1,085 Hits in 3.2 sec

Boosting Fast Adversarial Training with Learnable Adversarial Initialization [article]

Xiaojun Jia, Yong Zhang, Baoyuan Wu, Jue Wang, Xiaochun Cao
2022 arXiv   pre-print
In this paper, we boost fast AT with a sample-dependent adversarial initialization, i.e., an output from a generative network conditioned on a benign image and its gradient information from the target  ...  To boost training efficiency, the fast gradient sign method (FGSM) is adopted in fast AT methods by calculating the gradient only once. Unfortunately, the resulting robustness is far from satisfactory.  ...  Conclusion: In this paper, we propose a sample-dependent adversarial initialization to boost fast AT.  ... 
arXiv:2110.05007v3 fatcat:5xtdcw4bprcndjlqt4gl5pqjra
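The single-gradient FGSM step this snippet refers to can be sketched in numpy on a toy binary logistic-regression loss. This is an illustrative sketch only; the model, loss, and epsilon here are assumptions, not the paper's setup (which perturbs images fed to a deep network):

```python
import numpy as np

def fgsm_perturb(x, y, w, b, eps):
    """Single-step FGSM for binary logistic regression.

    Loss: L = log(1 + exp(-m)) with margin m = y * (w.x + b), y in {-1, +1}.
    dL/dx = -sigmoid(-m) * y * w; the gradient is computed once, and only
    its sign is used, scaled by eps.
    """
    margin = y * (x @ w + b)
    grad_x = -y * (1.0 / (1.0 + np.exp(margin))) * w  # 1/(1+e^m) = sigmoid(-m)
    return x + eps * np.sign(grad_x)

# toy example: the perturbation should reduce the margin (raise the loss)
w = np.array([1.0, -2.0]); b = 0.0
x = np.array([0.5, 0.5]); y = 1
x_adv = fgsm_perturb(x, y, w, b, eps=0.1)
```

The learnable-initialization idea in the paper replaces the usual zero or random starting point of this step with the output of a generator conditioned on the input and its gradient.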

Learnable Boundary Guided Adversarial Training [article]

Jiequan Cui, Shu Liu, Liwei Wang, Jiaya Jia
2021 arXiv   pre-print
Previous adversarial training improves model robustness at the cost of accuracy on natural data. In this paper, we reduce natural-accuracy degradation.  ...  Our solution is to constrain the logits from the robust model, which takes adversarial examples as input, to be similar to those from the clean model fed with the corresponding natural data.  ...  Instead of constraining M_robust with the classifier boundary from one well-trained static M_natural, we further generalize our method to Learnable Boundary Guided Adversarial Training (LBGAT) by training  ... 
arXiv:2011.11164v2 fatcat:xgfqgpkaeva2zipuy4377bmzx4
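The logit-constraint idea in the snippet (pull the robust model's adversarial-input logits toward the clean model's natural-input logits) can be sketched as a combined loss. The function name, the MSE form of the guidance term, and the weight `alpha` are assumptions for illustration, not the paper's exact objective:

```python
import numpy as np

def lbgat_style_loss(robust_logits_adv, natural_logits_clean, labels, alpha=1.0):
    """Cross-entropy on the robust model's adversarial logits plus an MSE
    term pulling them toward the clean model's logits on natural inputs."""
    # numerically stable softmax cross-entropy
    z = robust_logits_adv - robust_logits_adv.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -log_probs[np.arange(len(labels)), labels].mean()
    # boundary-guidance term: match the clean model's logits
    mse = ((robust_logits_adv - natural_logits_clean) ** 2).mean()
    return ce + alpha * mse

robust_logits_adv = np.array([[2.0, 0.0]])
natural_logits_clean = np.array([[2.0, 0.0]])
labels = np.array([0])
loss_matched = lbgat_style_loss(robust_logits_adv, natural_logits_clean, labels)
loss_shifted = lbgat_style_loss(robust_logits_adv, natural_logits_clean + 1.0, labels)
```

When the two logit sets agree, only the cross-entropy term remains; any mismatch adds the guidance penalty.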

Learning Black-Box Attackers with Transferable Priors and Query Feedback [article]

Jiancheng Yang, Yangzhou Jiang, Xiaoyang Huang, Bingbing Ni, Chenglong Zhao
2020 arXiv   pre-print
This paper addresses the challenging black-box adversarial attack problem, where only classification confidence of a victim model is available.  ...  The SimBA++ and HOGA result in Learnable Black-Box Attack (LeBA), which surpasses previous state of the art by considerable margins: the proposed LeBA significantly reduces queries, while keeping higher  ...  adversarial training [25] .  ... 
arXiv:2010.11742v1 fatcat:jymftjqk3baxzfbgq4zxc2kxtq

Double-Win Quant: Aggressively Winning Robustness of Quantized Deep Neural Networks via Random Precision Training and Inference

Yonggan Fu, Qixuan Yu, Meng Li, Vikas Chandra, Yingyan Lin
2021 International Conference on Machine Learning  
Specifically, we for the first time identify that when an adversarially trained model is quantized to different precisions in a post-training manner, the associated adversarial attacks transfer poorly  ...  However, quantized DNNs are vulnerable to adversarial attacks unless equipped with sophisticated techniques, leading to a dilemma between DNNs' efficiency and robustness.  ...  Here we benchmark our RPI and RPT techniques against SOTA adversarial training methods trained at full precision to validate their superior "win-win" in boosting both model robustness and efficiency  ... 
dblp:conf/icml/FuY0CL21 fatcat:ogeuv3jwazg23ixvjw4qzuyay4
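The post-training quantization and random-precision inference described above can be sketched with a symmetric uniform quantizer. The function names and the per-call precision draw are assumed interfaces for illustration, not the paper's implementation:

```python
import numpy as np

def quantize_uniform(w, bits):
    """Symmetric uniform post-training quantization of weights to `bits` bits."""
    levels = 2 ** (bits - 1) - 1          # e.g. 127 representable steps for 8 bits
    scale = np.abs(w).max() / levels
    return np.round(w / scale) * scale

def random_precision_forward(w, x, bit_choices, rng):
    """Sketch of random-precision inference: draw a random bit-width per
    call, so an attack crafted against one precision is evaluated against
    a (likely different) precision, to which it transfers poorly."""
    bits = rng.choice(bit_choices)
    return x @ quantize_uniform(w, bits)

w = np.array([[1.0, -0.5], [0.25, 0.75]])
w_q8 = quantize_uniform(w, bits=8)
rng = np.random.default_rng(0)
out = random_precision_forward(w, np.ones(2), bit_choices=[4, 6, 8], rng=rng)
```

The quantization error of `quantize_uniform` is bounded by half a step, which is why higher bit-widths track the full-precision weights closely while low bit-widths change the decision surface an attacker sees.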

Wavelet Regularization Benefits Adversarial Training [article]

Jun Yan, Huilin Yin, Xiaoyang Deng, Ziming Zhao, Wancheng Ge, Hao Zhang, Gerhard Rigoll
2022 arXiv   pre-print
Faced with these challenges, we present a theoretical analysis of the regularization property of wavelets, which can enhance adversarial training.  ...  Many regularization methods have proven effective in combination with adversarial training. Nevertheless, such regularization methods are implemented in the time domain.  ...  The other eight methods are trained with WideResNet models in adversarial training settings: Classical Adversarial Training [3], Fast Adversarial Training [62], Free Adversarial Training [63]  ... 
arXiv:2206.03727v1 fatcat:gn57ndlzujdp7dje2vssxjr5re

Towards Efficient Adversarial Training on Vision Transformers [article]

Boxi Wu, Jindong Gu, Zhifeng Li, Deng Cai, Xiaofei He, Wei Liu
2022 arXiv   pre-print
With only 65% of the fast adversarial training time, we match the state-of-the-art results on the challenging ImageNet benchmark.  ...  In this work, we first comprehensively study fast adversarial training on a variety of vision transformers and illustrate the relationship between efficiency and robustness.  ...  However, adversarial training is known to suffer from complexity issues [62, 81, 54]. In particular, Fast AT [81] enhances single-step adversarial training with random initialization.  ... 
arXiv:2207.10498v1 fatcat:lso77acxnng2rgn4jawagwgaqu

Gabor Layers Enhance Network Robustness [article]

Juan C. Pérez, Motasem Alfarra, Guillaume Jeanneret, Adel Bibi, Ali Thabet, Bernard Ghanem, Pablo Arbeláez
2020 arXiv   pre-print
are based on learnable Gabor parameters.  ...  In particular, we explore the effect on robustness against adversarial attacks of replacing the first layers of various deep architectures with Gabor layers, i.e. convolutional layers with filters that  ...  learnable parameters.  ... 
arXiv:1912.05661v2 fatcat:5lcxkaj44rhj7gbh4exlrntpyq
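A Gabor layer as described in this entry parameterizes each convolutional filter by a handful of learnable Gabor parameters instead of free per-pixel weights. The sketch below generates one real-valued Gabor kernel; the parameter names follow the standard Gabor convention and the specific values are illustrative:

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lambd, psi, gamma):
    """Real-valued Gabor filter: a Gaussian envelope times a cosine carrier.
    sigma (envelope width), theta (orientation), lambd (wavelength),
    psi (phase), and gamma (aspect ratio) are the few parameters a Gabor
    layer would learn, rather than every kernel weight independently."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # rotate the coordinate frame by theta
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / lambd + psi)
    return envelope * carrier

k = gabor_kernel(size=7, sigma=2.0, theta=0.0, lambd=4.0, psi=0.0, gamma=1.0)
```

Constraining first-layer filters to this family shrinks the learnable parameter count per filter from size*size weights to five scalars, which is the structural prior the paper links to robustness.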

A Fast and Efficient Conditional Learning for Tunable Trade-Off between Accuracy and Robustness [article]

Souvik Kundu, Sairam Sundaresan, Massoud Pedram, Peter A. Beerel
2022 arXiv   pre-print
In this paper, we present a fast learnable once-for-all adversarial training (FLOAT) algorithm which, instead of the existing FiLM-based conditioning, presents a unique weight-conditioned learning that requires no additional layer, thereby incurring no significant increase in parameter count, training time, or network latency compared to standard adversarial training.  ...  First, in view of the above concerns, we present a fast learnable once-for-all adversarial training (FLOAT).  ... 
arXiv:2204.00426v1 fatcat:iucc7cemtfdhrhkki5blafydou

A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning [article]

Hongjun Wang, Guanbin Li, Xiaobai Liu, Liang Lin
2020 arXiv   pre-print
Moreover, we revisit the reason for the high computational cost of adversarial training from the view of MCMC and design a new generative method called Contrastive Adversarial Training (CAT), which approaches  ...  In this paper, we present an effective method, called Hamiltonian Monte Carlo with Accumulated Momentum (HMCAM), aiming to generate a sequence of adversarial examples.  ...  The M-H algorithm keeps the generated samples in high-density regions. Algorithm 3 (Contrastive Adversarial Training) Input: a DNN classifier f_ω(·) with initial learnable parameters  ... 
arXiv:2010.07849v1 fatcat:yeuv5nd4sreonmhehpxyent3n4
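The Hamiltonian Monte Carlo machinery this entry builds on can be sketched with a plain leapfrog integrator and a Metropolis-Hastings correction. This is generic HMC on a toy target density, not the paper's HMCAM variant (which additionally accumulates momentum), and the target here is a standard normal for illustration:

```python
import numpy as np

def leapfrog(x, p, grad_u, step, n_steps):
    """One leapfrog trajectory: half momentum step, full position/momentum
    steps, final half momentum step. Volume-preserving and reversible."""
    p = p - 0.5 * step * grad_u(x)
    for _ in range(n_steps - 1):
        x = x + step * p
        p = p - step * grad_u(x)
    x = x + step * p
    p = p - 0.5 * step * grad_u(x)
    return x, p

def hmc_sample(x0, u, grad_u, step=0.2, n_steps=10, n_samples=400, seed=1):
    """Plain HMC sampler targeting density proportional to exp(-u(x))."""
    rng = np.random.default_rng(seed)
    x, samples = np.asarray(x0, float), []
    for _ in range(n_samples):
        p0 = rng.standard_normal(x.shape)
        x_new, p_new = leapfrog(x, p0.copy(), grad_u, step, n_steps)
        h_old = u(x) + 0.5 * (p0 ** 2).sum()
        h_new = u(x_new) + 0.5 * (p_new ** 2).sum()
        if rng.random() < np.exp(h_old - h_new):   # Metropolis correction
            x = x_new
        samples.append(x.copy())
    return np.array(samples)

# sample from a 1-D standard normal: U(x) = x^2 / 2, grad U = x
samples = hmc_sample(np.zeros(1),
                     u=lambda x: 0.5 * (x ** 2).sum(),
                     grad_u=lambda x: x)
```

In the paper's setting the target density is placed over adversarial perturbations, so each accepted state is an adversarial example rather than a point from a toy distribution.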

Adversarial Robustness under Long-Tailed Distribution [article]

Tong Wu, Ziwei Liu, Qingqiu Huang, Yu Wang, Dahua Lin
2021 arXiv   pre-print
We then perform a systematic study on existing long-tailed recognition methods in conjunction with the adversarial training framework.  ...  To push adversarial robustness towards more realistic scenarios, in this work we investigate the adversarial vulnerability as well as defense under long-tailed distributions.  ...  This research was conducted in collaboration with SenseTime.  ... 
arXiv:2104.02703v3 fatcat:pccjil7y2vbn5ai5nfb42t265e

Robust Binary Models by Pruning Randomly-initialized Networks [article]

Chen Liu, Ziqi Zhao, Sabine Süsstrunk, Mathieu Salzmann
2022 arXiv   pre-print
Unlike adversarial training, which learns the model parameters, we in contrast learn the structure of the robust model by pruning a randomly-initialized binary network.  ...  We propose ways to obtain robust models against adversarial attacks from randomly-initialized binary networks.  ...  Furthermore, we introduce a normalization layer to facilitate training and boost performance.  ... 
arXiv:2202.01341v1 fatcat:pdau6ykj4rcjnfuhiyrnxu55u4
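The core move in this entry, learning which connections of a fixed randomly-initialized binary network to keep, can be sketched as score-based top-k masking. The score array standing in for learned importance values and the 50% keep ratio are illustrative assumptions:

```python
import numpy as np

def topk_mask(scores, keep_ratio):
    """Binary pruning mask keeping the highest-scoring fraction of entries.
    In this setting the scores (not the weights) are what gets trained;
    the weights stay frozen at their random binary initialization."""
    k = max(1, int(round(keep_ratio * scores.size)))
    threshold = np.sort(scores.ravel())[-k]
    return (scores >= threshold).astype(float)

rng = np.random.default_rng(0)
weights = np.sign(rng.standard_normal((4, 4)))  # fixed random +/-1 weights
scores = rng.random((4, 4))                     # stand-in for learned scores
mask = topk_mask(scores, keep_ratio=0.5)
effective = weights * mask                      # the pruned binary subnetwork
```

Training then updates `scores` (e.g. through a straight-through estimator) so that the surviving subnetwork, not the weight values, carries the robustness.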

Weight-Covariance Alignment for Adversarially Robust Neural Networks [article]

Panagiotis Eustratiadis, Henry Gouk, Da Li, Timothy Hospedales
2021 arXiv   pre-print
However, existing SNNs are usually heuristically motivated, and often rely on adversarial training, which is computationally costly.  ...  We propose a new SNN that achieves state-of-the-art performance without relying on adversarial training, and enjoys solid theoretical justification.  ...  To show this, we train our best variant with PGD in two settings: purely adversarial training (AT) and mixed clean and adversarial training (CT+AT).  ... 
arXiv:2010.08852v3 fatcat:upxj2lg4o5epxnpj7jh44m3wye

Building Robust Ensembles via Margin Boosting [article]

Dinghuai Zhang, Hongyang Zhang, Aaron Courville, Yoshua Bengio, Pradeep Ravikumar, Arun Sai Suggala
2022 arXiv   pre-print
Empirically, we show that replacing the CE loss in state-of-the-art adversarial training techniques with our MCE loss leads to significant performance improvement.  ...  We view this problem from the perspective of margin-boosting and develop an algorithm for learning an ensemble with maximum margin.  ...  Training and Attack Details. In boosting experiments, the number of parameters of the five ResNet-18 ensemble is 55869810, while the deeper ResNet-158 model has 58156618 learnable parameters.  ... 
arXiv:2206.03362v1 fatcat:mzlkk3af3fgcdl5fmdcsoqsys4

Open problems in the security of learning

Marco Barreno, Peter L. Bartlett, Fuching Jack Chi, Anthony D. Joseph, Blaine Nelson, Benjamin I.P. Rubinstein, Udam Saini, J. D. Tygar
2008 Proceedings of the 1st ACM workshop on Workshop on AISec - AISec '08  
First, we suggest that finding bounds on adversarial influence is important to understand the limits of what an attacker can and cannot do to a learning system.  ...  Second, we investigate the value of adversarial capabilities: the success of an attack depends largely on what types of information and influence the attacker has.  ...  The notion of ACRE-learnability characterizes learners that can be reverse engineered with a polynomial number of queries.  ... 
doi:10.1145/1456377.1456382 dblp:conf/ccs/BarrenoBCJNRST08 fatcat:4uk7kufh4zevfgxkvhz7t4qvm4

Vanilla Feature Distillation for Improving the Accuracy-Robustness Trade-Off in Adversarial Training [article]

Guodong Cao, Zhibo Wang, Xiaowei Dong, Zhifei Zhang, Hengchang Guo, Zhan Qin, Kui Ren
2022 arXiv   pre-print
Adversarial training has been widely explored for mitigating attacks against deep models.  ...  However, most existing works are still trapped in the dilemma between higher accuracy and stronger robustness since they tend to fit a model towards robust features (not easily tampered with by adversaries  ...  Helper-based adversarial training (HAT) [25] and Learnable Boundary Guided Adversarial Training (LBGAT) [7] achieved a better trade-off with the knowledge transferred from third-party models. [5]  ... 
arXiv:2206.02158v1 fatcat:gu7bak35h5hnriche6zabzdbkm
Showing results 1 — 15 out of 1,085 results