245 Hits in 6.2 sec

Expressive 1-Lipschitz Neural Networks for Robust Multiple Graph Learning against Adversarial Attacks

Xin Zhao, Zeru Zhang, Zijie Zhang, Lingfei Wu, Jiayin Jin, Yang Zhou, Ruoming Jin, Dejing Dou, Da Yan
2021 International Conference on Machine Learning  
The combination of norm-constrained f and W_l leads to a 1-Lipschitz neural network for expressive and robust multiple graph learning.  ...  This paper proposes an attack-agnostic, graph-adaptive 1-Lipschitz neural network, ERNN, for improving the robustness of deep multiple graph learning while achieving remarkable expressive power.  ...  i.e., it derives lower and upper bounds of the feasible K_l for expressive and robust multiple graph learning against adversarial attacks.  ... 
dblp:conf/icml/ZhaoZZWJZJD021 fatcat:7thqnpakwvcm5emlwhxry2on4i
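As background for the norm-constrained construction this snippet alludes to, a minimal PyTorch sketch of a 1-Lipschitz layer: rescale the weight by its spectral norm so the layer's operator norm is at most 1, then compose with 1-Lipschitz activations. This illustrates the generic technique, not ERNN's graph-adaptive construction; all names are illustrative.

```python
import torch
import torch.nn as nn

class SpectralNormLinear(nn.Module):
    """Linear layer rescaled so its operator (spectral) norm is at most 1."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(0.1 * torch.randn(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        # Largest singular value of the weight = the layer's Lipschitz constant.
        sigma = torch.linalg.matrix_norm(self.weight, ord=2)
        return x @ (self.weight / sigma).T + self.bias

# ReLU is 1-Lipschitz, so the composition below is 1-Lipschitz end to end.
net = nn.Sequential(SpectralNormLinear(16, 32), nn.ReLU(), SpectralNormLinear(32, 4))
x, y = torch.randn(8, 16), torch.randn(8, 16)
assert (net(x) - net(y)).norm() <= (x - y).norm() + 1e-4  # empirical sanity check
```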

Adversarially Robust Neural Architectures [article]

Minjing Dong, Yanxi Li, Yunhe Wang, Chang Xu
2020 arXiv   pre-print
Deep Neural Networks (DNNs) are vulnerable to adversarial attacks.  ...  This paper thus aims to improve the adversarial robustness of the network from the architecture perspective within a NAS framework.  ...  Our proposed algorithm, Adversarially Robust Neural Architecture Search with Confidence Learning (RACL), starts from an approximation of the Lipschitz constant of the entire neural network under the NAS framework  ... 
arXiv:2009.00902v1 fatcat:st4c7uv5cjbt7cum6u5mfis754
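For context on the "approximation of Lipschitz constant of entire neural network" mentioned above: the standard upper bound such approaches build on is the product of per-layer spectral norms, with 1-Lipschitz activations contributing a factor of 1. A minimal sketch of that bound (not RACL's estimator), which is generally loose:

```python
import torch
import torch.nn as nn

def lipschitz_upper_bound(model: nn.Sequential) -> float:
    """Product of layer spectral norms: an upper bound (often loose) on the
    Lipschitz constant of a feed-forward net with 1-Lipschitz activations."""
    bound = 1.0
    for layer in model:
        if isinstance(layer, nn.Linear):
            bound *= torch.linalg.matrix_norm(layer.weight, ord=2).item()
    return bound

model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 2))
print(lipschitz_upper_bound(model))
```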

Wavelet Regularization Benefits Adversarial Training [article]

Jun Yan, Huilin Yin, Xiaoyang Deng, Ziming Zhao, Wancheng Ge, Hao Zhang, Gerhard Rigoll
2022 arXiv   pre-print
It verifies the assumption that our wavelet regularization method can enhance adversarial robustness, especially in deep wide neural networks.  ...  On the CIFAR-10 and CIFAR-100 datasets, our proposed Adversarial Wavelet Training method achieves considerable robustness under different types of attacks.  ...  Huan Hua for their help in our experimental work.  ... 
arXiv:2206.03727v1 fatcat:gn57ndlzujdp7dje2vssxjr5re
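The snippet does not spell out the regularizer, so the following is a rough sketch of the general idea only, under stated assumptions: penalize the energy of the high-frequency Haar-wavelet subbands of feature maps on top of the usual adversarial-training loss, with the Haar filters implemented as fixed depthwise convolutions so the penalty stays differentiable. The filter layout and the 0.1 weight are illustrative, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def haar_highfreq_energy(feat):
    """Mean squared energy of the LH/HL/HH Haar subbands of feat (N, C, H, W)."""
    c = feat.shape[1]
    lh = torch.tensor([[0.5, 0.5], [-0.5, -0.5]])
    hl = torch.tensor([[0.5, -0.5], [0.5, -0.5]])
    hh = torch.tensor([[0.5, -0.5], [-0.5, 0.5]])
    kernels = torch.stack([lh, hl, hh]).repeat(c, 1, 1).unsqueeze(1)  # (3C,1,2,2)
    subbands = F.conv2d(feat, kernels.to(feat), stride=2, groups=c)
    return subbands.pow(2).mean()

feat = torch.randn(4, 8, 32, 32, requires_grad=True)
adv_loss = torch.tensor(0.0)  # stands in for the usual adversarial-training loss
loss = adv_loss + 0.1 * haar_highfreq_energy(feat)
loss.backward()
```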

Structural Robustness for Deep Learning Architectures

Carlos Lassance, Vincent Gripon, Jian Tang, Antonio Ortega
2019 IEEE Data Science Workshop (DSW)  
Unfortunately, they are susceptible to various types of noise, including adversarial attacks and corrupted inputs.  ...  In this work we introduce a formal definition of robustness which can be viewed as a localized Lipschitz constant of the network function, quantified in the domain of the data to be classified.  ...  But if F ∈ Robust_α(r) for some r, this does not imply that F is α-Lipschitz  ... 
doi:10.1109/dsw.2019.8755564 dblp:conf/dsw/LassanceGTO19 fatcat:pwjiss7hxjgsnjdvpeskld3od4
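To make the "localized Lipschitz constant quantified in the domain of the data" concrete, a minimal Monte-Carlo estimate around one input; the sampling scheme and names are illustrative, not the paper's exact quantity:

```python
import torch
import torch.nn as nn

def local_lipschitz(f, x, radius=0.1, n_samples=256):
    """Largest observed output/input distance ratio over random perturbations
    of norm `radius` around a single flat input x of shape (d,)."""
    deltas = torch.randn(n_samples, x.numel())
    deltas = radius * deltas / deltas.norm(dim=1, keepdim=True)
    fx = f(x.unsqueeze(0))
    fx_pert = f(x.unsqueeze(0) + deltas)
    return ((fx_pert - fx).norm(dim=1) / radius).max().item()

f = nn.Sequential(nn.Linear(20, 50), nn.ReLU(), nn.Linear(50, 10))
print(local_lipschitz(f, torch.randn(20)))
```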

Robust Graph Neural Networks via Probabilistic Lipschitz Constraints [article]

Raghu Arghal, Eric Lei, Shirin Saeedi Bidokhti
2021 arXiv   pre-print
Graph neural networks (GNNs) have recently been demonstrated to perform well on a variety of network-based tasks such as decentralized control and resource allocation, and provide computationally efficient  ...  However, like many neural-network based systems, GNNs are susceptible to shifts and perturbations on their inputs, which can include both node attributes and graph structure.  ...  Graph neural networks (GNNs) have proven to be a powerful method for network-based learning tasks, achieving state-of-the-art performance in many applications such as epidemic spread prediction  ... 
arXiv:2112.07575v1 fatcat:rjt6hh2hnffwnepfou3dlzculq
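For readers new to the setting, a generic graph-convolution layer (not the paper's constrained architecture) shows how both inputs the snippet mentions, node attributes and graph structure, enter the computation:

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One generic graph-convolution step: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, a_hat, h):
        # Perturbations to a_hat (structure) or h (attributes) both propagate
        # through this product, motivating Lipschitz-style constraints.
        return torch.relu(self.lin(a_hat @ h))

n = 5
a = (torch.rand(n, n) > 0.5).float()
a = torch.clamp(a + a.T + torch.eye(n), max=1.0)        # symmetric, self-loops
d = a.sum(dim=1)
a_hat = a / torch.sqrt(d.view(-1, 1) * d.view(1, -1))   # D^{-1/2} A D^{-1/2}
print(GCNLayer(8, 4)(a_hat, torch.randn(n, 8)).shape)   # torch.Size([5, 4])
```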

Lipschitz regularized Deep Neural Networks generalize and are adversarially robust [article]

Chris Finlay, Jeff Calder, Bilal Abbasi, Adam Oberman
2019 arXiv   pre-print
In this work we study input gradient regularization of deep neural networks, and demonstrate that such regularization leads to generalization proofs and improved adversarial robustness.  ...  We demonstrate empirically that the regularized models are more robust, and that gradient norms of images can be used for attack detection.  ...  Even though modern deep learning is technically a parametric learning problem, the high degree of expressibility of deep neural networks effectively renders the problem nonparametric.  ... 
arXiv:1808.09540v4 fatcat:dzdm4ubljja4jntxju6xzyql5m
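The regularizer studied here is the classic double-backpropagation penalty: add the squared norm of the loss gradient with respect to the input to the training objective. A minimal sketch (the 0.5 weight is illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(20, 50), nn.ReLU(), nn.Linear(50, 10))
x = torch.randn(16, 20, requires_grad=True)
y = torch.randint(0, 10, (16,))

loss = F.cross_entropy(model(x), y)
# create_graph=True so the penalty term itself can be backpropagated through.
(grad_x,) = torch.autograd.grad(loss, x, create_graph=True)
total = loss + 0.5 * grad_x.pow(2).sum(dim=1).mean()
total.backward()
```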

Adversarial Attacks and Defenses in Images, Graphs and Text: A Review [article]

Han Xu, Yao Ma, Haochen Liu, Debayan Deb, Hui Liu, Jiliang Tang, Anil K. Jain
2019 arXiv   pre-print
Deep neural networks (DNNs) have achieved unprecedented success in numerous machine learning tasks in various domains.  ...  In this survey, we review the state-of-the-art algorithms for generating adversarial examples and the countermeasures against adversarial examples, for the three popular data types, i.e., images, graphs and text.  ...  In our review, we mainly discuss studies of adversarial examples for deep neural networks.  ... 
arXiv:1909.08072v2 fatcat:i3han24f3fdgpop45t4pmxcdtm
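As one concrete instance of the attack families such surveys cover in the image domain, the canonical fast gradient sign method (FGSM) fits in a few lines; the epsilon value is illustrative:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """x_adv = x + eps * sign(grad_x L(f(x), y)), clipped to the valid range."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```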

On Lipschitz Regularization of Convolutional Layers using Toeplitz Matrix Theory [article]

Alexandre Araujo, Benjamin Negrevergne, Yann Chevaleyre, Jamal Atif
2020 arXiv   pre-print
Lipschitz regularity is now established as a key property of modern deep learning with implications in training stability, generalization, robustness against adversarial examples, etc.  ...  This paper tackles the problem of Lipschitz regularization of Convolutional Neural Networks.  ...  We use our method to regularize the Lipschitz constant of neural networks for adversarial robustness and show that it offers a significant improvement over AT alone.  ... 
arXiv:2006.08391v2 fatcat:xa7ytmlkgres3dld2zd3mwueqy
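The baseline that Toeplitz-based bounds are typically compared against is power iteration on the convolution operator itself, alternating the convolution and its adjoint. A generic sketch assuming stride 1 and zero padding (not the paper's method):

```python
import torch
import torch.nn.functional as F

def conv_spectral_norm(weight, input_shape, n_iter=50):
    """Power-iteration estimate of the operator norm of a stride-1,
    zero-padded conv layer with weight (C_out, C_in, kH, kW)."""
    x = torch.randn(1, *input_shape)
    for _ in range(n_iter):
        x = x / x.norm()
        y = F.conv2d(x, weight, padding=1)            # apply A
        x = F.conv_transpose2d(y, weight, padding=1)  # apply A^T
    return F.conv2d(x / x.norm(), weight, padding=1).norm().item()

w = 0.1 * torch.randn(16, 3, 3, 3)
print(conv_spectral_norm(w, (3, 32, 32)))
```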

Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey [article]

Naveed Akhtar, Ajmal Mian
2018 arXiv   pre-print
Whereas deep neural networks have demonstrated phenomenal success (often beyond human capabilities) in solving complex problems, recent studies show that they are vulnerable to adversarial attacks in the  ...  We review the works that design adversarial attacks, analyze the existence of such attacks and propose defenses against them.  ...  [136] modified the output layer of a neural network to induce robustness against the adversarial attacks. Wang et al.  ... 
arXiv:1801.00553v3 fatcat:xfk7togp5bhxvbxtwox3sckqq4

Adversarial Attack for Uncertainty Estimation: Identifying Critical Regions in Neural Networks [article]

Ismail Alarab, Simant Prakoonwit
2021 arXiv   pre-print
In our approach, we perform uncertainty estimation based on the idea of adversarial attacks.  ...  We propose a novel method to capture data points near the decision boundary in neural networks that correspond to a specific type of uncertainty.  ...  We have proposed a novel method (MC-AA) based on adversarial attacks to capture model uncertainty in binary classification tasks.  ... 
arXiv:2107.07618v1 fatcat:imebytp2zvdufgwaiunr6ylp6q
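The abstract leaves the mechanics of MC-AA unspecified, so the following is a heavily hedged reconstruction of the general idea only: perturb the input back and forth along the adversarial gradient direction and score uncertainty by how often the prediction flips. All names and the epsilon grid are illustrative.

```python
import torch
import torch.nn.functional as F

def adversarial_uncertainty(model, x, eps_grid=(-0.05, -0.02, 0.02, 0.05)):
    """Fraction of adversarially perturbed copies whose predicted class
    disagrees with the clean prediction, per sample in the batch x."""
    x = x.clone().detach().requires_grad_(True)
    pred = model(x).argmax(dim=1)
    F.cross_entropy(model(x), pred).backward()
    direction = x.grad.sign()
    votes = torch.stack([model((x + e * direction).detach()).argmax(dim=1)
                         for e in eps_grid])
    return (votes != pred.unsqueeze(0)).float().mean(dim=0)
```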

Pay attention to your loss: understanding misconceptions about 1-Lipschitz neural networks [article]

Louis Béthune, Thibaut Boissin, Mathieu Serrurier, Franck Mamalet, Corentin Friedrich, Alberto González-Sanz
2022 arXiv   pre-print
Next, we show that 1-Lipschitz neural networks generalize well under milder assumptions.  ...  In this paper we clarify the matter: when it comes to classification 1-Lipschitz neural networks enjoy several advantages over their unconstrained counterpart.  ...  We also thank Sébastien Gerchinovitz for critical proof checking, Jean-Michel Loubes for useful discussions, and Etienne de Montbrun, Thomas Fel and Antonin for read-checking.  ... 
arXiv:2104.05097v5 fatcat:nigc7slturhejbgeiugrrahbie
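One concrete advantage the paper's discussion turns on: a 1-Lipschitz classifier converts its output margin directly into a certified l2 robustness radius, radius >= (top1 - top2) / (sqrt(2) * L) with L = 1. A sketch of that standard certificate (not the paper's tightest statement):

```python
import math
import torch

def certified_radius(logits, lipschitz_constant=1.0):
    """Certified l2 radius from the top-1 vs. top-2 logit margin of an
    L-Lipschitz classifier (per row of a batch of logits)."""
    top2 = logits.topk(2, dim=1).values
    margin = top2[:, 0] - top2[:, 1]
    return margin / (math.sqrt(2) * lipschitz_constant)

print(certified_radius(torch.tensor([[2.0, 0.5, -1.0]])))  # tensor([1.0607])
```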

Integrated Defense for Resilient Graph Matching

Jiaxiang Ren, Zijie Zhang, Jiayin Jin, Xin Zhao, Sixing Wu, Yang Zhou, Yelong Shen, Tianshi Che, Ruoming Jin, Dejing Dou
2021 International Conference on Machine Learning  
Nevertheless, there is still a lack of a comprehensive solution for further enhancing the robustness of graph matching against adversarial attacks.  ...  We propose an integrated defense model, IDRGM, for resilient graph matching with two novel defense techniques to defend against the above two attacks simultaneously.  ...  Related Work Recent defense techniques on graph learning models against adversarial attacks can be broadly classified into two categories: adversarial defense and certifiable robustness.  ... 
dblp:conf/icml/RenZJZWZSCJD21 fatcat:hsruoovkjna7zkdpb3yvedt7sq

Lipschitz Bound Analysis of Neural Networks [article]

Sarosij Bose
2022 arXiv   pre-print
Lipschitz Bound Estimation is an effective method of regularizing deep neural networks to make them robust against adversarial attacks.  ...  In this paper, we highlight the significant gap in obtaining a non-trivial Lipschitz bound certificate for Convolutional Neural Networks (CNNs) and empirically support it with extensive graphical analysis  ...  Adversarial attacks [1] on neural networks were first demonstrated by Szegedy et al., who showed that a fully connected neural network could be made to falsely classify MNIST [2] images  ... 
arXiv:2207.07232v1 fatcat:ejzn5xp4q5gwrj7cqg33flggqm
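To see the kind of gap the paper highlights, one can compare a cheap reshaped-kernel heuristic against a power-iteration estimate of the conv operator's actual norm; the heuristic is not a sound certificate in general, which is exactly the point. A small illustration on random kernels:

```python
import torch
import torch.nn.functional as F

w = 0.2 * torch.randn(8, 8, 3, 3)
# Heuristic: spectral norm of the kernel flattened to a (C_out, C_in*kH*kW) matrix.
heuristic = torch.linalg.matrix_norm(w.reshape(8, -1), ord=2).item()

# Power iteration on conv^T conv to estimate the true operator norm.
x = torch.randn(1, 8, 16, 16)
for _ in range(100):
    x = x / x.norm()
    x = F.conv_transpose2d(F.conv2d(x, w, padding=1), w, padding=1)
operator = F.conv2d(x / x.norm(), w, padding=1).norm().item()
print(heuristic, operator)  # the two estimates can differ substantially
```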

AdvRush: Searching for Adversarially Robust Neural Architectures [article]

Jisoo Mok, Byunggook Na, Hyeokjun Choe, Sungroh Yoon
2021 arXiv   pre-print
Current efforts to improve the robustness of neural networks against adversarial examples are focused on developing robust training methods, which update the weights of a neural network in a more robust  ...  We propose AdvRush, a novel adversarial robustness-aware neural architecture search algorithm, based upon the finding that, independent of the training method, the intrinsic robustness of a neural network  ...  a plethora of defense methods have been proposed to improve the robustness of neural networks against adversarial examples [36, 69, 75].  ... 
arXiv:2108.01289v2 fatcat:im5prxvyknbk5hkcwdmyayerhy

Adversarial Attacks and Defenses in Deep Learning: from a Perspective of Cybersecurity

Shuai Zhou, Chi Liu, Dayong Ye, Tianqing Zhu, Wanlei Zhou, Philip S. Yu
2022 ACM Computing Surveys  
The outstanding performance of deep neural networks has promoted deep learning applications in a broad set of domains.  ...  Further, it is difficult to evaluate the real threat of adversarial attacks or the robustness of a deep learning model, as there are no standard evaluation methods.  ...  [132] studied distillation methods for generating robust target neural networks.  ... 
doi:10.1145/3547330 fatcat:d3x3oitysvb73ado5kuaqakgtu
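Among the defenses surveyed, distillation-based ones train a student network on temperature-softened teacher outputs. A minimal sketch of the softened-label loss (the temperature T = 20 is illustrative):

```python
import torch
import torch.nn.functional as F

def soft_labels(teacher_logits, T=20.0):
    """Teacher class probabilities softened by temperature T."""
    return F.softmax(teacher_logits / T, dim=1)

def distillation_loss(student_logits, teacher_logits, T=20.0):
    # KL divergence between softened student and teacher distributions.
    return F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    soft_labels(teacher_logits, T), reduction='batchmean')
```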