Gradient-based adversarial attacks on categorical sequence models via traversing an embedded world
[article]
2020
arXiv
pre-print
The literature mostly considers adversarial attacks on models for images and other structured inputs. However, adversarial attacks on categorical sequences can also be harmful. ...
Successful attacks on inputs in the form of categorical sequences must address the following challenges: (1) non-differentiability of the target function, (2) constraints on transformations of initial ...
Another idea is to move into an embedded space and leverage gradient-based approaches in that space [11]. We also note that most of these works focus on text sequence data. ...
arXiv:2003.04173v3
fatcat:4zds5zbxzzaqthax5f7quvaazi
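The embedded-space idea in the snippet above is to take gradient steps on the continuous token embeddings and then project the result back to the nearest vocabulary tokens. Below is a minimal PyTorch sketch under assumed interfaces (a model that maps an embedded sequence to class logits, plus its embedding table); it illustrates the general recipe, not the authors' exact method:

```python
import torch
import torch.nn.functional as F

def embedding_space_attack(model, emb_matrix, token_ids, label, steps=10, lr=0.5):
    """Gradient attack on a categorical sequence via its embedding.

    model      : assumed to map a (seq_len, dim) embedding tensor to class logits
    emb_matrix : (vocab, dim) embedding table of the target model
    token_ids  : 1-D LongTensor with the original sequence
    """
    x = emb_matrix[token_ids].clone().detach().requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x).unsqueeze(0), label.unsqueeze(0))
        grad, = torch.autograd.grad(loss, x)
        with torch.no_grad():
            x += lr * grad  # ascend the loss in the continuous embedding space
    # project each perturbed embedding back to its nearest vocabulary token
    dists = torch.cdist(x.detach(), emb_matrix)  # (seq_len, vocab)
    return dists.argmin(dim=1)
```

The projection step is what restores a valid categorical sequence after the unconstrained gradient ascent.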
Attackability Characterization of Adversarial Evasion Attack on Discrete Data
2020
Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining
Based on our attackability analysis, we propose a computationally efficient orthogonal matching pursuit-guided attack method for evasion attack on discrete data. ...
Substantial experimental results on real-world datasets validate the proposed attackability conditions and the effectiveness of the proposed attack method. ...
Rather than only fooling the classifier, an adversarial attacker is also keen to gain task-specific knowledge of the defined features through the attack process. ...
doi:10.1145/3394486.3403194
fatcat:u7gh7otodffkvhxcoe75rw67ki
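The matching-pursuit intuition behind the attack above is a greedy, gradient-guided choice of which discrete features to perturb. A hedged sketch on a binary feature vector follows; the model interface and budget are illustrative assumptions, and the paper's OMP formulation is more principled than this first-order greedy loop:

```python
import torch
import torch.nn.functional as F

def greedy_flip_attack(model, x, label, budget=5):
    """Greedy, gradient-guided bit flips on a binary feature vector:
    at each step, flip the unmodified coordinate whose first-order
    effect on the loss is largest."""
    x_adv = x.clone().float()
    flipped = set()
    for _ in range(budget):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv).unsqueeze(0), label.unsqueeze(0))
        grad, = torch.autograd.grad(loss, x_adv)
        # flipping bit i changes x_i by (1 - 2*x_i); score that change
        scores = grad * (1.0 - 2.0 * x_adv.detach())
        for i in flipped:
            scores[i] = -float("inf")  # each coordinate is flipped at most once
        i = int(scores.argmax())
        flipped.add(i)
        x_adv = x_adv.detach()
        x_adv[i] = 1.0 - x_adv[i]
    return x_adv
```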
A Survey on Adversarial Attacks for Malware Analysis
[article]
2022
arXiv
pre-print
This work provides a taxonomy of adversarial evasion attacks on the basis of attack domain and adversarial generation techniques. ...
Increasing dependence on data has created ever-higher incentives to camouflage machine learning models. ...
We taxonomize the adversarial evasion landscape of malware based on the attack domain and the approach taken to realize the adversarial attack. ...
arXiv:2111.08223v2
fatcat:fiw3pgunsvb2vo7uv72mp6b65a
HufuNet: Embedding the Left Piece as Watermark and Keeping the Right Piece for Ownership Verification in Deep Neural Networks
[article]
2021
arXiv
pre-print
... to protect large-scale DNN models in the real world. ...
They also suffer from fraudulent ownership claims, as attackers can discover adversarial samples and use them as secret inputs to trigger distinguishable behaviors from stolen models. ...
Black-box watermarking approaches can further be categorized into two classes, blind approaches [25, 29] and non-blind approaches [1, 14, 32, 51], depending on whether or not an embedded watermark ...
arXiv:2103.13628v1
fatcat:z7sl7g437jdpfnhmybyd7dwzea
A Survey of Adversarial Machine Learning in Cyber Warfare
2018
Defence Science Journal
We explore the threat models for Machine Learning systems and describe the various techniques to attack and defend them. ...
This paper presents a comprehensive survey of this emerging area and the various techniques of adversary modelling. ...
Adversary attacks can be classified into black-box attacks and white-box attacks based on the adversary's knowledge of the model. ...
doi:10.14429/dsj.68.12371
fatcat:vyupcxe6hrhllb4rowequxrf5i
Semantically Adversarial Driving Scenario Generation with Explicit Knowledge Integration
[article]
2022
arXiv
pre-print
Generating adversarial scenarios, which have the potential to fail autonomous driving systems, provides an effective way to improve robustness. ...
Extending purely data-driven generative models, recent specialized models satisfy additional controllable requirements, such as embedding a traffic sign in a driving scene, by manipulating patterns implicitly ...
We propose a tree-structured generative model based on our knowledge categorization and construct a synthetic example to demonstrate the effectiveness of our knowledge integration. ...
arXiv:2106.04066v5
fatcat:65gtwmtio5awlcrie4hny5lune
Attacking Graph-based Classification via Manipulating the Graph Structure
[article]
2019
arXiv
pre-print
Existing adversarial machine learning studies have mainly focused on machine learning for non-graph data. Only a few recent studies have touched on adversarial attacks against graph-based classification methods. ...
We formulate our attack as a graph-based optimization problem whose solution produces the edges that an attacker needs to manipulate to achieve its attack goal. ...
Therefore, it is harder to optimize the adversarial matrix, because the gradient with respect to the adversarial matrix also depends on the edge weights, which are implicit functions of the adversarial matrix. ...
arXiv:1903.00553v2
fatcat:fkkvxas3andhpknrqrx4bhpthm
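For intuition on the structure attack above, one generic recipe is to relax the binary adjacency matrix to continuous values, take gradient steps on the relaxation, and then commit the strongest edge flips under a budget. A hedged PyTorch sketch of that recipe (not the paper's exact formulation, which also accounts for the implicit dependence of the gradient on edge weights):

```python
import torch
import torch.nn.functional as F

def structure_attack(gnn, A, X, label, steps=20, lr=0.1, budget=10):
    """gnn(A, X) is assumed to return class logits; A is a dense,
    symmetric (n, n) adjacency matrix with 0/1 entries."""
    P = torch.zeros_like(A, requires_grad=True)  # relaxed flip scores in [0, 1]
    for _ in range(steps):
        A_pert = torch.clamp(A + P - 2 * A * P, 0, 1)  # P=1 flips an entry
        loss = F.cross_entropy(gnn(A_pert, X).unsqueeze(0), label.unsqueeze(0))
        grad, = torch.autograd.grad(loss, P)
        with torch.no_grad():
            P += lr * grad
            P.clamp_(0, 1)
    # commit only the top-`budget` flips, mirrored to keep A symmetric
    flat = torch.triu(P.detach(), diagonal=1).flatten()
    idx = flat.topk(budget).indices
    mask = torch.zeros_like(flat)
    mask[idx] = 1.0
    mask = mask.view_as(A)
    mask = mask + mask.T
    return torch.clamp(A + mask - 2 * A * mask, 0, 1)
```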
Robust Neural Malware Detection Models for Emulation Sequence Learning
[article]
2018
arXiv
pre-print
We present an implementation of the Convoluted Partitioning of Long Sequences approach in order to tackle this vulnerability and operate on long sequences. ...
We present specialized models that can handle extremely long sequences while successfully performing malware detection in an efficient way. ...
[24] recently proposed an adversarial attack for recurrent neural networks. Recurrent models are more challenging to attack than feed-forward deep neural networks due to their recurrent nature. ...
arXiv:1806.10741v1
fatcat:2u5mcgjbvzgi3dnxajajfm2euq
Robust Neural Malware Detection Models for Emulation Sequence Learning
2018
MILCOM 2018 - 2018 IEEE Military Communications Conference (MILCOM)
We present an implementation of the Convoluted Partitioning of Long Sequences approach in order to tackle this vulnerability and operate on long sequences. ...
We present specialized models that can handle extremely long sequences while successfully performing malware detection in an efficient way. ...
Thus, defenses against adversarial attacks directed at recurrent models, such as those proposed in this paper, are an open research topic. ...
doi:10.1109/milcom.2018.8599785
dblp:conf/milcom/AgrawalSMS18
fatcat:wd3lvseq7zhrdalvilchewxjha
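The partition-then-combine idea behind handling extremely long emulation sequences in the two entries above can be illustrated generically: split the sequence into fixed-size chunks, encode each chunk with a shared encoder, and pool the chunk representations for classification. A minimal PyTorch sketch follows; the layer choices and sizes are assumptions, not the paper's Convoluted Partitioning architecture:

```python
import torch
import torch.nn as nn

class ChunkedSequenceClassifier(nn.Module):
    """Split a very long event sequence into fixed-size chunks, encode
    each chunk with a shared 1-D CNN, and pool chunk vectors."""

    def __init__(self, vocab=256, dim=64, chunk=512, classes=2):
        super().__init__()
        self.chunk = chunk
        self.embed = nn.Embedding(vocab, dim)
        self.conv = nn.Sequential(
            nn.Conv1d(dim, dim, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),
        )
        self.head = nn.Linear(dim, classes)

    def forward(self, ids):                     # ids: (seq_len,) LongTensor
        pad = (-ids.numel()) % self.chunk       # pad up to a chunk multiple
        ids = torch.cat([ids, ids.new_zeros(pad)])
        chunks = ids.view(-1, self.chunk)       # (n_chunks, chunk)
        h = self.embed(chunks).transpose(1, 2)  # (n_chunks, dim, chunk)
        h = self.conv(h).squeeze(-1)            # one vector per chunk
        return self.head(h.mean(dim=0))         # pool chunks, classify
```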
Deep Neural Mobile Networking
[article]
2020
arXiv
pre-print
In particular, deep learning based solutions can automatically extract features from raw data, without human expertise. ...
This thesis attacks important problems in the mobile networking area from various perspectives by harnessing recent advances in deep neural networks. ...
OPT-ATTACK [341] recasts the decision-based attack as a continuous optimization problem and solves it via randomized zeroth-order gradient updates. ...
arXiv:2011.05267v1
fatcat:yz2zp5hplzfy7h5kptmho7mbhe
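The randomized zeroth-order gradient update mentioned above estimates a gradient from function evaluations alone, with no backpropagation through the target. A minimal sketch of such an estimator (the objective f and the hyperparameters are placeholders):

```python
import numpy as np

def zeroth_order_grad(f, theta, sigma=0.01, samples=20):
    """Probe f along random unit directions and average the
    finite-difference slopes into a gradient estimate."""
    g = np.zeros_like(theta)
    f0 = f(theta)
    for _ in range(samples):
        u = np.random.randn(*theta.shape)
        u /= np.linalg.norm(u)
        g += (f(theta + sigma * u) - f0) / sigma * u
    return g / samples

# one descent step on the attack objective f:
#   theta -= lr * zeroth_order_grad(f, theta)
```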
Graph Neural Networks: Taxonomy, Advances and Trends
[article]
2022
arXiv
pre-print
Graph neural networks provide a powerful toolkit for embedding real-world graphs into low-dimensional spaces according to specific tasks. Up to now, there have been several surveys on this topic. ...
This survey aims to overcome this limitation, and provide a comprehensive review on the graph neural networks. ...
Graph2Seq [106] is a general end-to-end graph-to-sequence neural encoder-decoder model that converts an input graph into a sequence of vectors using an attention-based LSTM. ...
arXiv:2012.08752v3
fatcat:xj2kambrabfj3g5ldenfyixzu4
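The graph-to-sequence pattern attributed to Graph2Seq can be caricatured in a few lines: aggregate neighborhoods into node vectors, then decode a token sequence with an LSTM that attends over those vectors. A toy PyTorch sketch, not the published architecture; every size and the single aggregation layer are assumptions:

```python
import torch
import torch.nn as nn

class TinyGraph2Seq(nn.Module):
    def __init__(self, feat_dim=16, dim=32, vocab=100):
        super().__init__()
        self.enc = nn.Linear(2 * feat_dim, dim)   # node + mean-neighbor features
        self.cell = nn.LSTMCell(dim, dim)
        self.out = nn.Linear(2 * dim, vocab)

    def forward(self, X, A, steps=5):
        # X: (n, feat_dim) node features, A: (n, n) adjacency matrix
        deg = A.sum(1, keepdim=True).clamp(min=1)
        H = torch.tanh(self.enc(torch.cat([X, A @ X / deg], dim=1)))  # (n, dim)
        h = H.mean(0)                    # graph summary initializes the decoder
        c = torch.zeros_like(h)
        inp = H.mean(0)
        tokens = []
        for _ in range(steps):
            h, c = self.cell(inp.unsqueeze(0), (h.unsqueeze(0), c.unsqueeze(0)))
            h, c = h.squeeze(0), c.squeeze(0)
            attn = torch.softmax(H @ h, dim=0)    # attention over node vectors
            ctx = attn @ H
            tokens.append(self.out(torch.cat([h, ctx])).argmax())
            inp = ctx
        return torch.stack(tokens)
```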
Deep Learning in Information Security
[article]
2018
arXiv
pre-print
Based on an analysis of our reviewed papers, we point out shortcomings of DL methods with respect to those requirements and discuss further research opportunities. ...
If DL methods succeed in solving problems on a data type in one domain, they will most likely also succeed on similar data from another domain. ...
Then, this AST is traversed depth-first to create a sequence of nodes. Each node is embedded via an embedding layer, and a model is learned from such sequences of nodes via a bi-directional LSTM. ...
arXiv:1809.04332v1
fatcat:xfb7lgrkw5cirdl3qvmg3ssnbi
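The AST-to-sequence step described above is easy to make concrete. A small self-contained sketch using Python's own ast module (the reviewed work may target other languages; this only illustrates the depth-first flattening that precedes the embedding layer and bi-directional LSTM):

```python
import ast

def ast_node_sequence(source: str) -> list[str]:
    """Pre-order (depth-first) traversal of a Python AST into a flat
    sequence of node-type names."""
    seq = []

    def visit(node):
        seq.append(type(node).__name__)
        for child in ast.iter_child_nodes(node):
            visit(child)

    visit(ast.parse(source))
    return seq

print(ast_node_sequence("x = f(1) + 2"))
# ['Module', 'Assign', 'Name', 'Store', 'BinOp', 'Call', 'Name', 'Load',
#  'Constant', 'Add', 'Constant']
```

Each name in the resulting sequence would then be mapped to an integer id and fed through an embedding layer into the bi-directional LSTM.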
Federated Deep Learning for Cyber Security in the Internet of Things: Concepts, Applications, and Experimental Analysis
2021
IEEE Access
These three methods are generally categorized as adversarial attacking methods.
... their local model updates via differential attacks (Wang et al.). ...
doi:10.1109/access.2021.3118642
fatcat:222fgsvt3nh6zcgm5qt4kxe7c4
Interpreting and Improving Adversarial Robustness of Deep Neural Networks with Neuron Sensitivity
[article]
2019
arXiv
pre-print
Based on this, we further propose to improve adversarial robustness by constraining the similarity of sensitive neurons between benign and adversarial examples, which stabilizes the behaviors of sensitive ...
In this paper, we first draw the close connection between adversarial robustness and neuron sensitivities, as sensitive neurons make the most non-trivial contributions to model predictions in the adversarial ...
One important take-away is: adversarial training improves model robustness by embedding representation insensitivities. ...
arXiv:1909.06978v2
fatcat:2fndcygqvncpjcsxcjqf5kokqu
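The constraint on sensitive-neuron similarity above can be read as a regularizer that pulls a layer's activations on adversarial inputs toward its activations on the matching benign inputs. A hedged sketch of such a training loss; the feature-extractor handle `layer` and the weight `lam` are illustrative assumptions, not the paper's exact objective:

```python
import torch.nn.functional as F

def sensitivity_regularized_loss(model, x, x_adv, y, layer, lam=1.0):
    """Cross-entropy on adversarial inputs plus an activation-matching
    term that stabilizes a chosen layer between x and x_adv."""
    feats, feats_adv = layer(x), layer(x_adv)
    ce = F.cross_entropy(model(x_adv), y)
    return ce + lam * F.mse_loss(feats_adv, feats.detach())
```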
Machine Learning Based Cyber Attacks Targeting on Controlled Information: A Survey
[article]
2021
arXiv
pre-print
Recent publications are summarized to generalize an overarching attack methodology and to derive the limitations and future directions of ML-based stealing attacks. ...
The ML-based stealing attack is reviewed in perspectives of three categories of targeted controlled information, including controlled user activities, controlled ML model-related information, and controlled ...
Herein, the attack is based on the assumption that one model in the model set is trained with the same learning algorithm as the target model. ...
arXiv:2102.07969v1
fatcat:h4br22tpjre2lisc4zbzpy2iee
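The model-set assumption above can be illustrated simply: the attacker trains several candidate learners on the target's query responses and keeps whichever mimics the target best, which works when the target's true algorithm is among the candidates. A small scikit-learn sketch; the target oracle, query set, and candidate list are placeholders:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

def best_substitute(target_predict, X_query):
    """Fit each candidate on the target's labels for the queries and
    return the one that agrees with the target most on held-out queries."""
    y = target_predict(X_query)  # label the queries via the black-box target
    X_tr, X_te, y_tr, y_te = train_test_split(X_query, y, test_size=0.3)
    candidates = [LogisticRegression(max_iter=1000), SVC(),
                  DecisionTreeClassifier()]
    fitted = [c.fit(X_tr, y_tr) for c in candidates]
    agreement = [np.mean(c.predict(X_te) == y_te) for c in fitted]
    return fitted[int(np.argmax(agreement))]
```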
Showing results 1 — 15 out of 288 results