3,026 Hits in 2.7 sec

Stochastic sparse adversarial attacks [article]

Manon Césaire, Lucas Schott, Hatem Hajri, Sylvain Lamprier, Patrick Gallinari
2022 arXiv   pre-print
This paper introduces stochastic sparse adversarial attacks (SSAA): simple, fast, and purely noise-based targeted and untargeted attacks on neural network classifiers (NNC).  ...  SSAA offer new examples of sparse (or L_0) attacks, for which only a few methods have been proposed previously.  ...  We would like to thank Théo Combey for his help with the TensorFlow simulations of the attacks.  ... 
arXiv:2011.12423v4 fatcat:2gdd7tekbjfolby4z3biqe6q2y
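To make the ℓ_0 setting in the entry above concrete, here is a minimal NumPy sketch of a purely noise-based sparse attack: it perturbs k randomly chosen pixels with random noise and keeps the first perturbation that changes the predicted label. The `model` callable and every parameter are illustrative assumptions; this is not the SSAA algorithm itself.

```python
import numpy as np

def random_sparse_attack(model, x, label, k=20, trials=200, scale=1.0, rng=None):
    """Perturb k random pixels with Gaussian noise; accept the first
    perturbation that changes the predicted label (untargeted, L_0-bounded)."""
    rng = np.random.default_rng() if rng is None else rng
    for _ in range(trials):
        flat = x.flatten()                               # independent copy of the image
        idx = rng.choice(flat.size, size=k, replace=False)
        flat[idx] += rng.normal(0.0, scale, size=k)      # noise on k pixels only
        x_adv = np.clip(flat, 0.0, 1.0).reshape(x.shape)
        if int(np.argmax(model(x_adv))) != int(label):
            return x_adv
    return None  # no successful sparse perturbation found within the budget
```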

Stochastic Local Winner-Takes-All Networks Enable Profound Adversarial Robustness [article]

Konstantinos P. Panousis, Sotirios Chatzis, Sergios Theodoridis
2021 arXiv   pre-print
This work explores the potency of stochastic competition-based activations, namely Stochastic Local Winner-Takes-All (LWTA), against powerful (gradient-based) white-box and black-box adversarial attacks  ...  As we experimentally show, the arising networks yield state-of-the-art robustness against powerful adversarial attacks while retaining very high classification rate in the benign case.  ...  This is a key aspect that stochastically alters the information flow in the network and obstructs an adversary from attacking the model.  ... 
arXiv:2112.02671v1 fatcat:c3olbg5tz5ajdg2csvf4oz6twu
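A minimal NumPy sketch of the stochastic LWTA idea described in this entry: hidden units are grouped into small blocks, a softmax over each block gives winner probabilities, one winner per block is sampled, and the losing units are zeroed. Block size, shapes, and the sampling details are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def stochastic_lwta(h, block_size=2, rng=None):
    """Stochastic Local Winner-Takes-All: sample one winner per block of units."""
    rng = np.random.default_rng() if rng is None else rng
    batch, feats = h.shape                         # feats must be divisible by block_size
    blocks = h.reshape(batch, feats // block_size, block_size)
    # softmax over each block -> winner probabilities
    e = np.exp(blocks - blocks.max(axis=-1, keepdims=True))
    p = e / e.sum(axis=-1, keepdims=True)
    # categorical draw of one winner per block
    cum = np.cumsum(p, axis=-1)
    u = rng.random((batch, feats // block_size, 1))
    winner = (u < cum).argmax(axis=-1)
    mask = np.zeros_like(blocks)
    np.put_along_axis(mask, winner[..., None], 1.0, axis=-1)
    return (blocks * mask).reshape(batch, feats)   # losers are zeroed out
```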

Stochastic Local Winner-Takes-All Networks Enable Profound Adversarial Robustness

Konstantinos Panagiotis Panousis, Sotirios Chatzis, Sergios Theodoridis
2021 Zenodo  
attacks; we especially focus on Adversarial Training settings.  ...  Abstract: This work explores the potency of stochastic competition-based activations, namely Stochastic Local Winner-Takes-All (LWTA), against powerful (gradient-based) white-box and black-box adversarial  ...  adversarial attacks in the context of PGD-based Adversarial Training (AT) (Madry et al., 2017).  ... 
doi:10.5281/zenodo.5810556 fatcat:2dp3ny37eved7a5jjwlouykqhi
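For reference, the PGD attack behind the PGD-based adversarial training mentioned above (Madry et al., 2017) iterates signed gradient steps inside an L_inf ball around the input. The sketch below applies it to a toy logistic-regression scorer with an analytic input gradient, purely to keep the example self-contained; the weights `w`, `b`, and the step sizes are illustrative assumptions, not a full-network implementation.

```python
import numpy as np

def pgd_linf(x, y, w, b, eps=0.03, alpha=0.007, steps=10):
    """L_inf PGD against a logistic scorer sigmoid(w.x + b) with label y in {0,1}.
    Each step ascends the loss and re-projects onto the eps-ball around x."""
    x_adv = x.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(x_adv @ w + b)))   # model prediction
        grad = (p - y) * w                           # d(cross-entropy)/dx for this model
        x_adv = x_adv + alpha * np.sign(grad)        # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)     # project onto the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)             # keep valid pixel range
    return x_adv
```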

Stochastic Local Winner-Takes-All Networks Enable Profound Adversarial Robustness

Konstantinos Panousis, Sotirios Chatzis, Sergios Theodoridis
2021 Zenodo  
This work explores the potency of stochastic competition-based activations, namely Stochastic Local Winner-Takes-All (LWTA), against powerful (gradient-based) white-box and black-box adversarial attacks  ...  As we experimentally show, the arising networks yield state-of-the-art robustness against powerful adversarial attacks while retaining very high classification rate in the benign case.  ...  adversarial attacks in the context of PGD-based Adversarial Training (AT) (Madry et al., 2017).  ... 
doi:10.5281/zenodo.6000328 fatcat:oltpshwt2jb4rm5w5wmzhuhnum

Training Adversarially Robust Sparse Networks via Bayesian Connectivity Sampling

Ozan Özdenizci, Robert Legenstein
2021 International Conference on Machine Learning  
Deep neural networks have been shown to be susceptible to adversarial attacks.  ...  Hence, if adversarial robustness is an issue, training of sparsely connected networks necessitates considering adversarially robust sparse learning.  ...  There has been a growing interest in tackling the problem of achieving robustness against adversarial attacks with very sparsely connected neural networks (cf. Section 2).  ... 
dblp:conf/icml/OzdenizciL21 fatcat:y4zf3joapvelznhuftqy5euwl4

Neuro-Inspired Deep Neural Networks with Sparse, Strong Activations [article]

Metehan Cekic, Can Bakiskan, Upamanyu Madhow
2022 arXiv   pre-print
perturbations (without adversarial training).  ...  We use standard stochastic gradient training, supplementing the end-to-end discriminative cost function with layer-wise costs promoting Hebbian ("fire together, wire together") updates for highly active  ...  In contrast to the iterative sparse coding and dictionary learning in such an approach, our HaH-based training targets strong sparse activations in a manner amenable to standard stochastic gradient training  ... 
arXiv:2202.13074v3 fatcat:sfr3wdfu4ncrffviaebp5rapdm
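As a rough illustration of a layer-wise cost that favours a few strongly active neurons, one could penalize all but the top-k activations of a layer; minimizing such a term alongside the discriminative loss pushes the layer toward sparse, strong responses. This is only a guess at the flavour of the Hebbian-style auxiliary cost described in the entry above, not the authors' formulation; `k` and the functional form are assumptions.

```python
import numpy as np

def sparse_strong_activation_cost(activations, k=5):
    """Auxiliary cost sketch: reward the k strongest activations, penalize the rest."""
    a = np.sort(np.asarray(activations).ravel())[::-1]   # descending order
    return float(a[k:].sum() - a[:k].sum())              # low when few units dominate
```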

Local Competition and Stochasticity for Adversarial Robustness in Deep Learning [article]

Konstantinos P. Panousis and Sotirios Chatzis and Antonios Alexos and Sergios Theodoridis
2021 arXiv   pre-print
As we show, our method achieves high robustness to adversarial perturbations, with state-of-the-art performance in powerful adversarial attack schemes.  ...  This work addresses adversarial robustness in deep learning by considering deep networks with stochastic local winner-takes-all (LWTA) activations.  ...  The experimental results suggest that the stochastic nature of the proposed activation significantly contributes to the robustness of the model; it yields significant gains in all adversarial attacks compared  ... 
arXiv:2101.01121v2 fatcat:rszbg7ckffavdfr5xf7gi2q5cu

LCANets: Lateral Competition Improves Robustness Against Corruption and Attack

Michael Teti, Garrett T. Kenyon, Ben Migliori, Juston Moore
2022 International Conference on Machine Learning  
We also perform the first adversarial attacks with full knowledge of a sparse coding CNN layer by attacking LCANets with white-box and black-box attacks, and we show that, contrary to previous hypotheses, sparse coding layers are not very robust to white-box attacks.  ...  By performing the first direct adversarial attacks on a sparse coding CNN layer, we observe that sparse CNN layers are not as robust to adversarial attacks as previously thought.  ... 
dblp:conf/icml/TetiKMM22 fatcat:wzvg5jipy5eopfemyqrokwodpi
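The name LCANets suggests the sparse coding layer implements the Locally Competitive Algorithm (LCA), in which neurons laterally inhibit one another until only a few remain active. Below is a minimal NumPy sketch of the standard LCA dynamics (soft-thresholded leaky integration); the dictionary, step size, and threshold are illustrative assumptions rather than the paper's actual configuration.

```python
import numpy as np

def lca_sparse_code(x, D, lam=0.1, steps=200, dt=0.1):
    """Locally Competitive Algorithm: compute a sparse code a with D @ a ≈ x.
    D has unit-norm columns; active neurons inhibit each other via G."""
    b = D.T @ x                          # feed-forward drive
    G = D.T @ D - np.eye(D.shape[1])     # lateral inhibition (competition)
    u = np.zeros(D.shape[1])             # membrane potentials
    for _ in range(steps):
        a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)  # soft threshold
        u = u + dt * (b - u - G @ a)                        # leaky integration
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)
```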

Adversarial Skill Learning for Robust Manipulation [article]

Pingcheng Jian, Chao Yang, Di Guo, Huaping Liu, Fuchun Sun
2020 arXiv   pre-print
To improve the robustness of the policy, we introduce the adversarial training mechanism to the robotic manipulation tasks in this paper, and an adversarial skill learning algorithm based on soft actor-critic  ...  In the case of a stochastic policy, we extend the standard soft actor-critic approach to our adversarial version, called Adv-SAC.  ...  Furthermore, for sparse reward tasks, the goal-conditioned data augmentation method HER [23] has been incorporated into our adversarial training process, enabling our method to work well even in sparse reward  ... 
arXiv:2011.03383v1 fatcat:r6scinjbhbfvdfl2fcgq5ufcge

Local Competition and Stochasticity for Adversarial Robustness in Deep Learning

Konstantinos Panagiotis Panousis, Sotirios Chatzis, Antonios Alexos, Sergios Theodoridis
2021 Zenodo  
As we show, our method achieves high robustness to adversarial perturbations, with state-of-the-art performance in powerful adversarial attack schemes.  ...  Abstract: This work addresses adversarial robustness in deep learning by considering deep networks with stochastic local winner-takes-all (LWTA) activations.  ...  Conclusions: This work attacked adversarial robustness in deep learning.  ... 
doi:10.5281/zenodo.5498188 fatcat:yfxq4nwdj5d2ned7glrz5oko4y

HASI: Hardware-Accelerated Stochastic Inference, A Defense Against Adversarial Machine Learning Attacks [article]

Mohammad Hossein Samavatian, Saikat Majumdar, Kristin Barber, Radu Teodorescu
2021 arXiv   pre-print
This paper presents HASI, a hardware-accelerated defense that uses a process we call stochastic inference to detect adversarial inputs.  ...  Unfortunately, DNNs are known to be vulnerable to so-called adversarial attacks that manipulate inputs to cause incorrect results that can be beneficial to an attacker or damaging to the victim.  ...  This paper presents HASI, a hardware/software co-designed defense that relies on a novel stochastic inference process to effectively defend against state-of-the-art adversarial attacks.  ... 
arXiv:2106.05825v3 fatcat:xr2nxttx5nf23f7uh7chhfnk5u
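The general shape of a stochastic-inference detector can be sketched in a few lines: run the classifier several times under injected input noise and flag the input when the noisy predictions disagree too much, since adversarial examples tend to be less stable under noise. This is only an illustration of the idea; the noise model, threshold, and `model` callable are assumptions, and it omits HASI's hardware acceleration and co-design entirely.

```python
import numpy as np

def stochastic_inference_detect(model, x, n_runs=8, sigma=0.05, thresh=0.5, rng=None):
    """Run the model several times under injected noise and flag the input
    as adversarial if the predicted labels disagree too often."""
    rng = np.random.default_rng() if rng is None else rng
    preds = []
    for _ in range(n_runs):
        noisy = x + rng.normal(0.0, sigma, size=x.shape)
        preds.append(int(np.argmax(model(noisy))))
    agreement = np.bincount(preds).max() / n_runs   # fraction voting for the modal label
    return agreement < thresh                        # True -> likely adversarial
```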

Efficient and Robust Classification for Sparse Attacks [article]

Mark Beliaev, Payam Delgosha, Hamed Hassani, Ramtin Pedarsani
2022 arXiv   pre-print
To this end, we propose a novel defense method that consists of "truncation" and "adversarial training".  ...  In this paper, we consider perturbations bounded by the ℓ_0-norm, which have been shown to be effective attacks in the domains of image recognition, natural language processing, and malware detection.  ...  Utilizing the state-of-the-art sparse attack sparse-rs [29] as well as the commonly used Pointwise Attack [28], we show that while adversarial training alone fails to robustify against ℓ_0-attacks  ... 
arXiv:2201.09369v1 fatcat:r47xluq76vcqfndoe42v2zo7xe
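One plausible reading of the "truncation" component is sketched below for a linear scorer: drop the few largest-magnitude per-coordinate contributions before summing, so an ℓ_0-bounded adversary who can only touch a handful of coordinates cannot dominate the decision. This is an assumption-laden illustration of the concept, not the defense actually proposed in the paper.

```python
import numpy as np

def truncated_score(x, w, k=10):
    """Linear score with the k largest-magnitude coordinate-wise contributions
    removed, limiting what a sparse (L_0) perturbation can change."""
    contrib = w * x                              # per-coordinate contributions
    order = np.argsort(np.abs(contrib))          # ascending by magnitude
    kept = order[:-k] if k > 0 else order        # drop the k largest terms
    return contrib[kept].sum()
```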

A Generative Model based Adversarial Security of Deep Learning and Linear Classifier Models [article]

Ferhat Ozgur Catak, Samed Sivaslioglu, Kevser Sahinbas
2020 arXiv   pre-print
The main idea behind adversarial attacks against machine learning models is to produce erroneous results by manipulating trained models.  ...  In this paper, we have proposed a mitigation method for adversarial attacks against machine learning models, using an autoencoder, one of the generative models.  ...  Denoising autoencoders obtain corrupted data through a stochastic mapping: the input is x, the corrupted data is x̃, and the stochastic mapping is x̃ ∼ q_D(x̃ | x).  ... 
arXiv:2010.08546v1 fatcat:trqowc5b5jbnvaqvgafyiui76m
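The stochastic mapping x̃ ∼ q_D(x̃ | x) mentioned in the snippet is, in the standard denoising-autoencoder setup, just a random corruption of the input; the autoencoder is then trained to reconstruct the clean x from x̃. A minimal sketch of masking-noise corruption follows; the drop probability is an illustrative choice, not the paper's setting.

```python
import numpy as np

def corrupt(x, drop_prob=0.3, rng=None):
    """Sample x_tilde ~ q_D(x_tilde | x) by randomly zeroing input coordinates."""
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random(x.shape) >= drop_prob      # keep each coordinate with prob. 1 - drop_prob
    return x * mask
```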

An Adaptive Empirical Bayesian Method for Sparse Deep Learning

Wei Deng, Xiao Zhang, Faming Liang, Guang Lin
2019 Advances in Neural Information Processing Systems  
The proposed method also improves resistance to adversarial attacks.  ...  using stochastic approximation (SA).  ...  As the degree of adversarial attack increases, the images become vaguer, as shown in Fig. 2(a) and Fig. 2(c).  ... 
pmid:33244209 pmcid:PMC7687285 fatcat:vxhhhiq32zd7tecr6iio4s5tme

AutoAdversary: A Pixel Pruning Method for Sparse Adversarial Attack [article]

Jinqiao Li, Xiaotao Liu, Jian Zhao, Furao Shen
2022 arXiv   pre-print
However, many existing sparse adversarial attacks use heuristic methods to select the pixels to be perturbed, and regard the pixel selection and the adversarial attack as two separate steps.  ...  the pixel selection into the adversarial attack.  ...  Given the inherent similarity between neural network pruning and sparse adversarial attacks, we propose an end-to-end sparse adversarial attack method which utilizes the attack to guide the automatic  ... 
arXiv:2203.09756v1 fatcat:nmacljf4dvhjvlfmc4e3lk7ngi
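A simple way to picture the pixel-selection side of a sparse attack is an ℓ_0 projection that prunes a dense perturbation down to its k most important pixels; the entry above integrates this selection into the attack end-to-end rather than applying it as a separate heuristic step. The sketch below shows only the naive magnitude-based pruning, with `k` as an illustrative parameter, not the paper's learned selection.

```python
import numpy as np

def prune_to_k_pixels(delta, k):
    """Keep the k largest-magnitude entries of a perturbation, zero the rest
    (a hard L_0 projection, used here purely for illustration)."""
    flat = np.abs(delta).ravel()
    if k >= flat.size:
        return delta
    thresh = np.partition(flat, -k)[-k]          # k-th largest magnitude
    return np.where(np.abs(delta) >= thresh, delta, 0.0)
```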
Showing results 1 — 15 out of 3,026 results