2,407 Hits in 5.7 sec

Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks [article]

Ambra Demontis, Marco Melis, Maura Pintor, Matthew Jagielski, Battista Biggio, Alina Oprea, Cristina Nita-Rotaru, Fabio Roli
2019 arXiv   pre-print
We highlight two main factors contributing to attack transferability: the intrinsic adversarial vulnerability of the target model, and the complexity of the surrogate model used to optimize the attack.  ...  Empirical evidence for transferability has been shown in previous work, but the underlying reasons why an attack transfers or not are not yet well understood.  ...  (e.g., due to the high dimensionality of the input space and a low level of regularization), for an attack to succeed it suffices to apply only tiny, imperceptible perturbations.  ... 
arXiv:1809.02861v4 fatcat:mndeutlpmjdwrghudt54spnj5q

Defending Distributed Classifiers Against Data Poisoning Attacks [article]

Sandamal Weerasinghe, Tansu Alpcan, Sarah M. Erfani, Christopher Leckie
2020 arXiv   pre-print
Local Intrinsic Dimensionality (LID) is a promising metric that characterizes the outlierness of data samples.  ...  In this work, we introduce a new approximation of LID called K-LID that uses kernel distance in the LID calculation, which allows LID to be calculated in high dimensional transformed spaces.  ...  Recent evidence suggests a connection between the adversarial vulnerability of learning and the intrinsic dimensionality of the data [9] , [10] .  ... 
arXiv:2008.09284v1 fatcat:2ohufvjpwbbtnfvr2puaoqvaa4
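As a rough illustration of how the LID metric in this entry is typically estimated (the paper's K-LID swaps Euclidean distance for kernel distance; the sketch below uses only the standard maximum-likelihood estimator over plain Euclidean neighbor distances, and the function name `lid_mle` is ours, not the paper's):

```python
import numpy as np

def lid_mle(query, data, k=20):
    """Maximum-likelihood LID estimate at `query` from its k nearest
    neighbors in `data` (Levina-Bickel-style estimator): the negative
    reciprocal of the mean log ratio of each neighbor distance to the
    k-th neighbor distance."""
    dists = np.sort(np.linalg.norm(data - query, axis=1))
    dists = dists[dists > 0][:k]   # drop the query itself if present
    r_k = dists[-1]                # distance to the k-th neighbor
    return -1.0 / np.mean(np.log(dists / r_k))
```

For points drawn uniformly from a d-dimensional region the estimate concentrates around d, which is why low-LID samples read as "ordinary" and high-LID samples as outliers.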

Models and Framework for Adversarial Attacks on Complex Adaptive Systems [article]

Vahid Behzadan, Arslan Munir
2017 arXiv   pre-print
To facilitate the analysis of such attacks, we present multiple approaches to the modeling of CAS as dynamical, data-driven, and game-theoretic systems, and develop quantitative definitions of attack,  ...  We introduce the paradigm of adversarial attacks that target the dynamics of Complex Adaptive Systems (CAS).  ...  Achieving these objectives in large-scale CAS will require extending the models of dynamics established in Section III into tractable models that are better-suited for analysis of high-dimensional nonlinear  ... 
arXiv:1709.04137v1 fatcat:risynvwcrffbddmogtwg5cmcli

Advances in adversarial attacks and defenses in computer vision: A survey [article]

Naveed Akhtar, Ajmal Mian, Navid Kardan, Mubarak Shah
2021 arXiv   pre-print
In [2], we reviewed the contributions made by the computer vision community in adversarial attacks on deep learning (and their defenses) until 2018.  ...  However, it is now known that DL is vulnerable to adversarial attacks that can manipulate its predictions by introducing visually imperceptible perturbations in images and videos.  ...  [305] claimed that adversarial examples span a contiguous high-dimensional space.  ... 
arXiv:2108.00401v2 fatcat:23gw74oj6bblnpbpeacpg3hq5y

The Feasibility and Inevitability of Stealth Attacks [article]

Ivan Y. Tyukin, Desmond J. Higham, Eliyas Woldegeorgis, Alexander N. Gorban
2021 arXiv   pre-print
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.  ...  Building on work by [Tyukin et al., International Joint Conference on Neural Networks, 2020], we develop a range of new implementable attack strategies with accompanying analysis, showing that with high  ...  Another alternative is to employ dimensionality reduction approaches facilitating lower-dimensional layer widths during and after the training. Constraining attack accuracy.  ... 
arXiv:2106.13997v2 fatcat:7ctfaw66czghxl35fpi6ovm7ie

Automated Poisoning Attacks and Defenses in Malware Detection Systems: An Adversarial Machine Learning Approach [article]

Sen Chen, Minhui Xue, Lingling Fan, Shuang Hao, Lihua Xu, Haojin Zhu, Bo Li
2017 arXiv   pre-print
Today, sophisticated attackers can adapt by maximally sabotaging machine-learning classifiers by polluting training data, rendering most recent machine learning-based malware detection tools (such as  ...  To tackle the problem, we propose KuafuDet, a two-phase learning enhancing approach that learns mobile malware by adversarial detection.  ...  Through simulation, we presented practical bounds on the accuracy loss for each attacker.  ... 
arXiv:1706.04146v3 fatcat:f7yzifuahff6dfyaihlrnn3gfa

Econometric Evidence in EU Competition Law: An Empirical and Theoretical Analysis

Ioannis Lianos, Christos Genakos
2012 Social Science Research Network  
high barriers to entry, the nature of the product (homogeneous and standardized etc), and economic theory on the role of facilitating practices.  ...  The "intrinsic quality" of empirical evidence, such as econometrics, depends on the reliance and reliability of the underlying data, while that of its theoretical counterparts, the economic theory which  ... 
doi:10.2139/ssrn.2184563 fatcat:4feqmbksvfaqti2hggeiycutne

Progressive-Scale Boundary Blackbox Attack via Projective Gradient Estimation [article]

Jiawei Zhang and Linyi Li and Huichen Li and Xiaolu Zhang and Shuang Yang and Bo Li
2021 arXiv   pre-print
However, its query cost is in general high, especially for high-dimensional image data.  ...  Based on our theoretical framework, we propose Progressive-Scale enabled projective Boundary Attack (PSBA) to improve the query efficiency via progressive scaling techniques.  ...  Progressive-Scale Blackbox Attack via Projective Gradient Estimation  ... 
arXiv:2106.06056v1 fatcat:oemiaeophvc47bjfgwn2tijonm
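Boundary blackbox attacks of this family estimate a gradient direction purely from hard-label queries. A minimal sketch of the generic Monte-Carlo sign-based estimator (HopSkipJump-style, not PSBA's projective variant; `f` is a hypothetical boolean oracle returning whether a point is classified adversarially):

```python
import numpy as np

def estimate_gradient(f, x, n=2000, sigma=1e-2, seed=0):
    """Estimate the gradient direction at x by averaging random unit
    perturbations, weighted +1/-1 by the hard-label response of the
    model oracle f."""
    rng = np.random.default_rng(seed)
    u = rng.normal(size=(n, x.size))
    u /= np.linalg.norm(u, axis=1, keepdims=True)   # unit directions
    phi = np.array([1.0 if f(x + sigma * ui) else -1.0 for ui in u])
    g = (phi[:, None] * u).mean(axis=0)
    return g / np.linalg.norm(g)
```

Each estimate spends n model queries, and its variance grows with the input dimension, which is exactly the query inefficiency on high-dimensional images that motivates the paper's progressive scaling.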

Graph Embedding for Recommendation against Attribute Inference Attacks

Shijie Zhang, Hongzhi Yin, Tong Chen, Zi Huang, Lizhen Cui, Xiangliang Zhang
2021 Proceedings of the Web Conference 2021  
The key idea is to facilitate adversarial learning with an RNN-based private attribute inference attacker and a CF-based recommender.  ...  Evaluation protocols: attribute inference attack resistance. To evaluate all models' robustness against attribute inference attacks, we first build a strong adversary classifier (i.e., attacker).  ... 
doi:10.1145/3442381.3449813 fatcat:sbdyykzmrfho3c36g7o6gy6o6y

Graph Embedding for Recommendation against Attribute Inference Attacks [article]

Shijie Zhang, Hongzhi Yin, Tong Chen, Zi Huang, Lizhen Cui, Xiangliang Zhang
2021 arXiv   pre-print
However, little attention has been paid to developing recommender systems that can defend such attribute inference attacks, and existing works achieve attack resistance by either sacrificing considerable  ...  Apart from the leakage of raw user data, the fragility of current recommenders under inference attacks offers malicious attackers a backdoor to estimate users' private attributes via their behavioral footprints  ...  The key idea is to facilitate adversarial learning with an RNN-based private attribute inference attacker and a CF-based recommender.  ... 
arXiv:2101.12549v1 fatcat:mae3fmklcjc3dk32y6uyyvju7q

Poisoning Attacks and Defenses on Artificial Intelligence: A Survey [article]

Miguel A. Ramirez, Song-Kyoo Kim, Hussam Al Hamadi, Ernesto Damiani, Young-Ji Byon, Tae-Yeon Kim, Chung-Suk Cho, Chan Yeob Yeun
2022 arXiv   pre-print
Moreover, this paper emphasizes the underlying assumptions and limitations considered by both attackers and defenders along with their intrinsic properties such as: availability, reliability, privacy,  ...  This work compiles the most relevant insights and findings found in the latest literature addressing this type of attack.  ...  This work introduces K-LID, a new approximation of Local Intrinsic Dimensionality (LID), a metric that characterizes the outlierness of data samples.  ... 
arXiv:2202.10276v2 fatcat:sjx7bvyoh5fj3ivvoglrbc6ipq

A survey on Adversarial Recommender Systems: from Attack/Defense strategies to Generative Adversarial Networks [article]

Yashar Deldjoo and Tommaso Di Noia and Felice Antonio Merra
2020 arXiv   pre-print
successful application of AML in generative adversarial networks (GANs) for generative applications, thanks to their ability to learn (high-dimensional) data distributions.  ...  The goal of this survey is two-fold: (i) to present recent advances on adversarial machine learning (AML) for the security of RS (i.e., attacking and defending recommendation models), (ii) to show another  ...  The adversarial learning scheme, or the min-max game, which lies at the heart of GANs empowers these ML models with phenomenal capabilities such as the ability to model high-dimensional distributions.  ... 
arXiv:2005.10322v2 fatcat:4wqcluqgnbbwpkicunn42et5te

A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability

Xiaowei Huang, Daniel Kroening, Wenjie Ruan, James Sharp, Youcheng Sun, Emese Thamo, Min Wu, Xinping Yi
2020 Computer Science Review  
Dimensionality Reduction: Some researchers propose to defend against adversarial attacks via dimensionality reduction.  ...  ., 2018e] uses Local Intrinsic Dimensionality (LID) [Houle, 2017] to measure the adversarial region by considering the local distance distribution from a reference point to its neighbours.  ... 
doi:10.1016/j.cosrev.2020.100270 fatcat:biji56htvnglfhl7n3jnuelu2i

Robust Anomaly Detection and Backdoor Attack Detection Via Differential Privacy [article]

Min Du, Ruoxi Jia, Dawn Song
2019 arXiv   pre-print
added by attackers.  ...  detection, novelty detection, and backdoor attack detection.  ...  A successful backdoor attack should have high image classification accuracy on CLEAN-test, which we refer to as benign accuracy, as well as high accuracy on POISONED-test with poisoned labels, which indicates  ... 
arXiv:1911.07116v1 fatcat:wc3tlhjkpvautaubrcjxpvb4xe
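The general mechanism behind differentially private training, clipping each sample's gradient and adding calibrated noise so that no single (possibly poisoned) example can dominate an update, can be sketched as follows (a generic DP-SGD-style aggregation; names and parameters are illustrative, not this paper's implementation):

```python
import numpy as np

def dp_noisy_grad(per_example_grads, clip_norm, noise_mult, rng):
    """Clip each per-example gradient to L2 norm clip_norm, sum them,
    add Gaussian noise scaled by noise_mult * clip_norm, and average."""
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)
```

Because any one example's contribution is bounded by clip_norm, a backdoored sample with an abnormally large gradient loses most of its influence, which is the intuition linking differential privacy to outlier and backdoor detection.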

The Dilemma Between Data Transformations and Adversarial Robustness for Time Series Application Systems [article]

Sheila Alemany, Niki Pissinou
2021 arXiv   pre-print
Theoretical evidence has shown that the high intrinsic dimensionality of datasets facilitates an adversary's ability to develop effective adversarial examples in classification models.  ...  Adversarial examples, or nearly indistinguishable inputs created by an attacker, significantly reduce machine learning accuracy.  ...  Positive impacts by dimensionality reduction techniques are only presented where the technique embeds the high-dimensional input space into a lower-dimensional structure that approaches the intrinsic dimension  ... 
arXiv:2006.10885v2 fatcat:i5zxkx3fcbeqnel42pfwjlr6aa
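The kind of transformation this entry describes, embedding inputs into a lower-dimensional structure near the data's intrinsic dimension, is often instantiated with PCA. A minimal generic sketch (not the paper's specific pipeline; `pca_project` is our name):

```python
import numpy as np

def pca_project(X, d):
    """Project rows of X onto their top-d principal components and map
    back to input space, discarding variation outside that
    d-dimensional subspace."""
    mu = X.mean(axis=0)
    Xc = X - mu
    # rows of Vt are the principal directions, strongest first
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:d]
    return (Xc @ W.T) @ W + mu
```

If d is chosen near the intrinsic dimension, on-manifold structure survives while off-manifold perturbation directions, the ones adversarial examples often exploit, are removed.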