Explainability and Adversarial Robustness for RNNs
[article]
2020
arXiv
pre-print
... notion of adversarial robustness, and show that an adversarial training procedure can significantly reduce the attack surface. ...
To understand a classifier's potential for misclassification, we extend existing explainability techniques and propose new ones, suitable particularly for sequential data. ...
ACKNOWLEDGEMENTS The Titan Xp used for this research was donated by the NVIDIA Corporation. ...
arXiv:1912.09855v2
fatcat:f2h5qoaftneqtjolx6jep3gwom
Adversarially Robust and Explainable Model Compression with On-Device Personalization for Text Classification
[article]
2021
arXiv
pre-print
Here we attempt to tackle these challenges by designing a new training scheme for model compression and adversarial robustness, including the optimization of an explainable feature mapping objective, a knowledge distillation objective, and an adversarial robustness objective. ...
CONCLUSIONS AND FUTURE WORK In this work, we design a new training scheme for model compression ensuring adversarial robustness, explainability, and personalization for NLP applications. ...
arXiv:2101.05624v3
fatcat:dlptkeqlvzfeth36o2abf6rtj4
Analyzing the Robustness of Fake-news Detectors under Black-box Adversarial Attacks
2021
IEEE Access
In particular, we investigate the robustness of four different DL architectural choices: multilayer perceptron (MLP), convolutional neural network (CNN), recurrent neural network (RNN), and a recently proposed ...
Our experiments suggest that RNNs are more robust than the other architectures. Further, we show that increasing the input sequence length generally increases the detector's robustness. ...
In order to further explain and analyze the adversarial example phenomena, we use LIME to generate explanations for the decisions made by the state-of-the-art Hybrid CNN-RNN detector. ...
doi:10.1109/access.2021.3085875
fatcat:gj2zekbhvjh65pabshzwqbhkfq
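The core idea behind the LIME explanations mentioned in this entry can be sketched in a few lines: perturb the input text, observe how the prediction changes, and rank words by their contribution. The classifier below is a toy keyword scorer invented for illustration; the paper instead explains its trained Hybrid CNN-RNN detector.

```python
# Minimal sketch of the perturbation-based idea behind LIME for text.
# `toy_fake_news_score` is a hypothetical stand-in classifier; a real setup
# would query the trained fake-news model instead.

def toy_fake_news_score(text):
    """Return a 'fake' probability from hand-picked trigger words (toy stand-in)."""
    triggers = {"shocking": 0.4, "miracle": 0.3, "secret": 0.2}
    score = 0.1 + sum(w for t, w in triggers.items() if t in text.lower().split())
    return min(score, 1.0)

def word_importance(text, predict):
    """Score each word by the prediction drop when it is removed."""
    words = text.split()
    base = predict(text)
    importance = {}
    for i, w in enumerate(words):
        perturbed = " ".join(words[:i] + words[i + 1:])
        importance[w] = base - predict(perturbed)
    return importance

scores = word_importance("shocking miracle cure found", toy_fake_news_score)
top = max(scores, key=scores.get)
print(top)  # 'shocking' causes the largest prediction drop when removed
```

The actual LIME library fits a local linear surrogate over many random perturbations rather than single-word deletions, but the perturb-and-observe principle is the same.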
Enhancing Recurrent Neural Networks with Sememes
[article]
2019
arXiv
pre-print
Moreover, we find the sememe-incorporated models have great robustness and outperform adversarial training in defending against adversarial attacks. ...
For evaluation, we use several benchmark datasets, including PTB and WikiText-2 for language modeling and SNLI for natural language inference. ...
Furthermore, we conduct an adversarial attack experiment, finding the sememe-incorporated RNNs display great robustness and perform much better than adversarial training. ...
arXiv:1910.08910v1
fatcat:gribdnwrlffaxbpyhsgegnhofm
Adversarial recovery of agent rewards from latent spaces of the limit order book
[article]
2019
arXiv
pre-print
... rewards robust to variations in the underlying dynamics, and transfer them to new regimes of the original environment. ...
Recent advances in adversarial learning have allowed extending inverse RL to applications with non-stationary environment dynamics unknown to the agents, arbitrary structures of reward functions and improved ...
The adversarial learning methods considered will observe the trajectories in D to infer a reward r̂ (that yields a policy π̂) to explain D. ...
arXiv:1912.04242v1
fatcat:zfbo3bhxc5akzjvdgisb2ib3f4
Coverage Guided Testing for Recurrent Neural Networks
[article]
2020
arXiv
pre-print
Experiments confirm that there is a positive correlation between adversary rate and coverage rate, evidence that the test metrics are valid indicators for robustness evaluation. ...
This paper develops a coverage-guided testing approach for a major class of RNNs -- long short-term memory networks (LSTMs). ...
... i.e., whether the finding that a robustness evaluation based on structural test metrics may not be a good indicator of the actual robustness of CNNs still stands in the context of RNNs and our test metrics. ...
arXiv:1911.01952v2
fatcat:hm6emtrxs5bbraa7qs7jn66idy
On the Robustness of Semantic Segmentation Models to Adversarial Attacks
2018
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
Our observations will aid future efforts in understanding and defending against adversarial examples. ...
Furthermore, we show how mean-field inference in deep structured models and multiscale processing naturally implement recently proposed adversarial defenses. ...
We thank Qizhu Li and Bernardino Romera-Paredes for valuable input. ...
doi:10.1109/cvpr.2018.00099
dblp:conf/cvpr/ArnabMT18
fatcat:jflpkjnihbd5toazufxb36cbpq
Survey for Trust-aware Recommender Systems: A Deep Learning Perspective
[article]
2020
arXiv
pre-print
... filter untruthful noise (e.g., spammers and fake information) or enhance attack resistance; explainable recommender systems that provide explanations of recommended items. ...
A significant remaining challenge for existing recommender systems is that users may not trust the recommender systems for either lack of explanation or inaccurate recommendation results. ...
(Cong et al. [133]) Examples of generative RNNs for explainable recommendation. ...
arXiv:2004.03774v2
fatcat:q7mehir7hbbzpemw3q5fkby5ty
On the Robustness of Semantic Segmentation Models to Adversarial Attacks
[article]
2018
arXiv
pre-print
Moreover, in the shorter term, we show how to effectively benchmark robustness and show which segmentation models should currently be preferred in safety-critical applications due to their inherent robustness ...
Our observations will aid future efforts in understanding and defending against adversarial examples. ...
Also, explaining the effect of residual connections on adversarial robustness remains an open research question. ...
arXiv:1711.09856v3
fatcat:dnhjhzi745cpnhbq5hq4jywomy
3D-A-Nets: 3D Deep Dense Descriptor for Volumetric Shapes with Adversarial Networks
[article]
2017
arXiv
pre-print
... that jointly trains a set of convolutional neural network (CNN), recurrent neural network (RNN) and an adversarial discriminator. ...
We developed a new definition of 2D multilayer dense representation (MDR) of 3D volumetric data to extract concise but geometrically informative shape descriptions and a novel design of adversarial networks ...
... CNN), recurrent neural network (RNN) and an adversarial discriminator for the robust 3D-DDSD for volumetric shapes. ...
arXiv:1711.10108v1
fatcat:7kctixmsuffq5avgdrfeyw6sr4
Speaker Identification for Household Scenarios with Self-Attention and Adversarial Training
2020
Interspeech 2020
To distill informative global acoustic embedding representations from utterances and be robust to adversarial perturbations, we propose a Self-Attentive Adversarial Speaker-Identification method (SAASI). ...
Given a closed set of users, with a few short registered voice utterances for each user as enrollment, and another short test utterance ...
The adversarial training helps generalize the model, makes it more robust against sample noise, and presumably helps especially for new speakers. ...
doi:10.21437/interspeech.2020-3025
dblp:conf/interspeech/LiJWHS20
fatcat:7kckubsocvdlnhcraovlhj55om
An Adversarial Approach for Intrusion Detection Systems Using Jacobian Saliency Map Attacks (JSMA) Algorithm
2020
Computers
... F1 score and training epochs. ...
With the aim to understand the impact of such attacks, in this paper, we have proposed a novel random neural network-based adversarial intrusion detection system (RNN-ADV). ...
The methodology for adversarial attack crafting using the Jacobian Saliency Map Attack (JSMA) algorithm is explained in Section 3. ...
doi:10.3390/computers9030058
fatcat:3wx63mpwxnhsjaqxcpn5xwromi
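The JSMA saliency rule used by entries like the one above can be sketched on a toy linear model, where the Jacobian of each class score F_c(x) = w_c · x with respect to feature i is simply w_c[i]. The weights below are hypothetical; the paper applies the attack to its trained RNN-ADV detector.

```python
# Minimal sketch of the JSMA saliency map on a toy two-class linear model.
# A feature is salient for the target class only if it increases the target
# score and does not increase the other class's score.

def jsma_saliency(w_target, w_other):
    """Per-feature saliency for pushing the prediction toward the target class."""
    saliency = []
    for jt, jo in zip(w_target, w_other):
        if jt < 0 or jo > 0:          # feature cannot help target, or helps others
            saliency.append(0.0)
        else:
            saliency.append(jt * abs(jo))
    return saliency

# Hypothetical weights for the target ("benign") and other ("attack") classes.
w_benign = [0.9, -0.2, 0.5, 0.1]
w_attack = [-0.3, 0.4, 0.2, -0.6]

s = jsma_saliency(w_benign, w_attack)
best = max(range(len(s)), key=s.__getitem__)   # index of the feature to perturb first
print(best, s)
```

In the full attack, the highest-saliency feature is perturbed, the Jacobian recomputed, and the loop repeated until the prediction flips or a distortion budget is exhausted.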
Adversarial Machine Learning in Text Processing: A Literature Survey
2022
IEEE Access
In this paper, we surveyed major subjects in adversarial machine learning for text processing applications. ...
This usage will allow for a seamless lexical and grammatical transition between various writing styles. ...
Below we explain several adversarial training techniques and showcase their strengths and weaknesses. ...
doi:10.1109/access.2022.3146405
fatcat:emahpmjqmnbjpbhptrrtrjlja4
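One family of adversarial training techniques covered by surveys like the one above can be sketched as an FGSM-style loop: at each step, perturb the input in the loss-increasing direction, then update the model on the perturbed example. The data, epsilon, and one-feature logistic model below are all toy assumptions for illustration.

```python
import math

# Toy sketch of FGSM-style adversarial training on a 1-feature logistic model.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_example(x, y, w, b, eps):
    """Perturb x by eps in the direction that increases the log-loss."""
    grad_x = (sigmoid(w * x + b) - y) * w    # d(logloss)/dx
    return x + eps * (1 if grad_x > 0 else -1)

def train(data, eps=0.3, lr=0.5, epochs=200):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            x_adv = fgsm_example(x, y, w, b, eps)   # train on the worst case
            err = sigmoid(w * x_adv + b) - y
            w -= lr * err * x_adv
            b -= lr * err
    return w, b

data = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]
w, b = train(data)
print(sigmoid(w * 1.0 + b) > 0.5)
```

Training on the perturbed points effectively shrinks the margin the model sees, so the learned boundary must hold up under eps-sized perturbations; for text, the analogous perturbations are word substitutions rather than continuous shifts.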
Training Recurrent Neural Network through Moment Matching for NLP Applications
2018
Interspeech 2018
A recurrent neural network (RNN) is conventionally trained in the supervised mode but used in the free-running mode for inference on test samples. ...
Our MM-RNN shows significant performance improvements over existing approaches when tested on practical NLP applications including logic form generation and image captioning. ...
Then, we detailed how to incorporate the moment matching training strategy and its kernel tricks [15] into RNN for robust and efficient training. ...
doi:10.21437/interspeech.2018-1369
dblp:conf/interspeech/DengSCJ18
fatcat:lvw3w2s22bgnxloeyoxuctsufi
RNN-Test: Towards Adversarial Testing for Recurrent Neural Network Systems
[article]
2021
arXiv
pre-print
Finally, RNN-Test solves the joint optimization problem to maximize state inconsistency and state coverage, and crafts adversarial inputs for various tasks of different kinds of inputs. ...
While massive efforts have been invested in adversarial testing of convolutional neural networks (CNN), testing for recurrent neural networks (RNN) is still limited and leaves threats for vast sequential ...
The work [24] first explains the definition of adversarial inputs for RNNs with categorical outputs and sequential outputs, but just presents rough qualitative descriptions that adversarial inputs could ...
arXiv:1911.06155v2
fatcat:qg6sdc4hfzcsrct5lo5ja6rxcq
Showing results 1 — 15 out of 4,089 results