Boosting Transferability of Targeted Adversarial Examples via Hierarchical Generative Networks
[article]
2022
arXiv
pre-print
To this end, we develop a simple yet effective framework to craft targeted transfer-based adversarial examples, applying a hierarchical generative network. ...
Transfer-based adversarial attacks can evaluate model robustness in the black-box setting. ...
Compared with other instance-agnostic attacks, our hierarchical partition mechanism enables conditional generative networks to specify any target class with a feasible number of models for ...
arXiv:2107.01809v2
fatcat:ymjqvyrfk5ha7kojpje7f3rt4a
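The entry above amortizes targeted attacks with a generative network; the underlying building block is a targeted gradient step that pushes an input toward a chosen class. A minimal sketch of that step, assuming a toy linear model with a hand-coded softmax gradient (`W`, `softmax`, and `targeted_step` are illustrative names, not from the paper):

```python
# One FGSM-style *targeted* step on a toy linear classifier.
# Generator-based attacks learn to produce such perturbations in a
# single forward pass instead of iterating per input.
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def targeted_step(x, W, target, eps):
    """Move x along -sign(grad) of the targeted loss -log p[target]."""
    logits = [sum(w_i * x_i for w_i, x_i in zip(row, x)) for row in W]
    p = softmax(logits)
    # d(-log p[target])/dx_j = sum_k (p[k] - [k == target]) * W[k][j]
    grad = [sum((p[k] - (1.0 if k == target else 0.0)) * W[k][j]
                for k in range(len(W)))
            for j in range(len(x))]
    return [x_j - eps * (1 if g > 0 else -1 if g < 0 else 0)
            for x_j, g in zip(x, grad)]
```

After one step the probability of the target class increases, which is the property the hierarchical generator is trained to produce directly.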
Exploring Transferable and Robust Adversarial Perturbation Generation from the Perspective of Network Hierarchy
[article]
2021
arXiv
pre-print
In this paper, we explore effective mechanisms to boost both of them from the perspective of network hierarchy, where a typical network can be hierarchically divided into output stage, intermediate stage ...
and boost the robustness and transferability of the adversarial perturbations. ...
Our contributions are summarized as follows: • We propose a transferable and robust adversarial perturbation generation (TRAP) method from the perspective of network hierarchy to boost the transferability ...
arXiv:2108.07033v1
fatcat:d4okvsdl45gq5kp5wto3yvfaiq
MTDeep: Boosting the Security of Deep Neural Nets Against Adversarial Attacks with Moving Target Defense
[article]
2019
arXiv
pre-print
Present attack methods can make state-of-the-art classification systems based on deep neural networks misclassify every adversarially modified test example. ...
In this paper, we draw inspiration from the fields of cybersecurity and multi-agent systems and propose to leverage the concept of Moving Target Defense (MTD) in designing a meta-defense for 'boosting' ...
Note that although there is a boost in overall accuracy against adversarial examples generated using FGM, the other attacks (1) DF, which is generated in a very different manner compared to FGM ...
arXiv:1705.07213v3
fatcat:gdj4dkzoafcinhbbe5qwidfyie
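The Moving Target Defense described above answers each query with a network drawn at random from a pool, so an attacker cannot tailor a perturbation to one fixed model. A minimal sketch, assuming the pool members are plain callables (all names here are illustrative):

```python
# Moving Target Defense sketch: sample a fresh model per query.
import random

class MovingTargetEnsemble:
    def __init__(self, models, seed=None):
        self.models = list(models)
        self.rng = random.Random(seed)

    def predict(self, x):
        # The attacker never knows which member produced the answer,
        # so a perturbation crafted against one member may fail.
        model = self.rng.choice(self.models)
        return model(x)
```

The paper additionally studies how to pick the switching strategy (e.g. via game theory); uniform sampling here is the simplest instance.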
Progressive Adversarial Networks for Fine-Grained Domain Adaptation
2020
2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
The progressive learning is applied upon both category classification and domain alignment, boosting both the discriminability and the transferability of the fine-grained features. ...
This paper presents the Progressive Adversarial Networks (PAN) to align fine-grained categories across domains with a curriculum-based adversarial learning framework. ...
Acknowledgments This work was supported in part by the Natural Science Foundation of China (61772299, 71690231), and the MOE Strategic Research Project on Artificial Intelligence Algorithms for Big Data ...
doi:10.1109/cvpr42600.2020.00923
dblp:conf/cvpr/WangCWLW20
fatcat:eqdox4pysbbwxksucfhnjohuye
No-Reference Point Cloud Quality Assessment via Domain Adaptation
[article]
2022
arXiv
pre-print
In particular, we treat natural images as the source domain and point clouds as the target domain, and infer point cloud quality via unsupervised adversarial domain adaptation. ...
Leveraging the rich subjective scores of the natural images, we can learn the evaluation criteria of human perception via a DNN and transfer the prediction capability to 3D point clouds. ...
(b) A generative network is used to extract hierarchical features from both source and target domains. ...
arXiv:2112.02851v2
fatcat:dhekexceqvcwpn6xda4acu5hhy
Hierarchical Knowledge Squeezed Adversarial Network Compression
2020
PROCEEDINGS OF THE THIRTIETH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE AND THE TWENTY-EIGHTH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE
Recently, more attention has been devoted to employing adversarial training to minimize the discrepancy between the output distributions of two networks. ...
adversarial training framework to learn the student network. ...
Figure 1: The overview of the proposed Hierarchical Knowledge Squeezed Adversarial Network Compression (HK-SANC) via intermediate supervision. ...
doi:10.1609/aaai.v34i07.6799
fatcat:cht7mlfwrnebbivrufrc4zpp5u
Unsupervised Domain Adaptation for Distance Metric Learning
2019
International Conference on Learning Representations
Moreover, we present a non-parametric multi-class entropy minimization loss to further boost the discriminative power of FTNs on the target domain. ...
To handle both within and cross domain verifications, we propose a Feature Transfer Network (FTN) to separate the target feature space from the original source space while aligned with a transformed source ...
adversarial neural network (DANN) and (c) our feature transfer network (FTN). ...
dblp:conf/iclr/SohnSYC19
fatcat:iwnhycikabc2rhuifhkm2ypj6m
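The DANN baseline mentioned in the snippet above hinges on a gradient reversal layer (GRL): identity in the forward pass, gradient multiplied by a negative factor in the backward pass, so the feature extractor learns to confuse the domain discriminator. A hand-rolled sketch without an autograd framework (class and method names are illustrative):

```python
# Gradient reversal layer sketch: forward is identity, backward flips
# the gradient scaled by lambda, turning the discriminator's
# minimization into maximization for the feature extractor.
class GradientReversal:
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        # Features pass through unchanged.
        return x

    def backward(self, grad_output):
        # Gradients flowing back to the feature extractor are negated.
        return [-self.lam * g for g in grad_output]
```

In an autograd framework this would be a custom function with these two rules; FTN builds on the same adversarial alignment idea.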
Dual Adversarial Neural Transfer for Low-Resource Named Entity Recognition
2019
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Additionally, adversarial training is adopted to boost model generalization. ...
We propose a new neural transfer method termed Dual Adversarial Transfer Network (DATNet) for addressing low-resource Named Entity Recognition (NER). ...
., 2017) presented a transfer learning approach based on deep hierarchical recurrent neural network, where full/partial hidden features between source and target tasks are shared. ...
doi:10.18653/v1/p19-1336
dblp:conf/acl/ZhouZJZFGK19
fatcat:j6brj5nw6rgqtg5cwnqpnbu6ii
iFAN: Image-Instance Full Alignment Networks for Adaptive Object Detection
2020
PROCEEDINGS OF THE THIRTIETH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE AND THE TWENTY-EIGHTH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE
In two domain adaptation tasks: synthetic-to-real (SIM10K → Cityscapes) and normal-to-foggy weather (Cityscapes → Foggy Cityscapes), iFAN outperforms the state-of-the-art methods with a boost of 10%+ AP ...
Recent research on unsupervised domain adaptive object detection has verified that aligning data distributions between source and target images through adversarial learning is very useful. ...
(Liu et al. 2019) generated transferable examples to fill in the gap between the source and target domain by adversarially training deep classifiers to output consistent predictions over the transferable ...
doi:10.1609/aaai.v34i07.7015
fatcat:j6iin27zu5gtrcx53gmsjlfcri
Boosting the Transferability of Adversarial Samples via Attention
2020
2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Consequently, it can promote the transferability of resultant adversarial instances. ...
It computes model attention over extracted features to regularize the search of adversarial examples, which prioritizes the corruption of critical features that are likely to be adopted by diverse architectures ...
The work described in this paper was supported by the Research Grants Council of the Hong Kong Special Administrative Region, China (CUHK 14210717 of the General Research Fund and CUHK 2300174 of the Collaborative ...
doi:10.1109/cvpr42600.2020.00124
dblp:conf/cvpr/WuSCZKLT20
fatcat:w3jtcjo3xfhpppoo5ydrz3lbb4
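The attention-guided attack above steers the perturbation budget toward features the model attends to, on the assumption that those features are shared across architectures. A minimal sketch, assuming the attention weights and gradient are precomputed and given as plain lists (the function name is illustrative):

```python
# Attention-weighted perturbation step: scale the signed gradient by
# normalized attention so high-attention features absorb more of the
# budget eps.
def attention_weighted_step(x, grad, attention, eps):
    total = sum(attention)
    w = [a / total for a in attention]
    return [x_j + eps * w_j * (1 if g > 0 else -1 if g < 0 else 0)
            for x_j, g, w_j in zip(x, grad, w)]
```

Features with twice the attention receive twice the perturbation, which is the prioritization of "critical features" the snippet describes.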
A Cross-Level Information Transmission Network for Hierarchical Omics Data Integration and Phenotype Prediction from a New Genotype
2021
Bioinformatics
We demonstrate the effectiveness and significant performance boost of CLEIT in predicting anti-cancer drug sensitivity from somatic mutations via the assistance of gene expressions when compared with state-of-the-art ...
Results We propose a novel Cross-LEvel Information Transmission network (CLEIT) framework to address the above issues. ...
Funding This work has been supported by the National Institute of General Medical Sciences of National Institute of Health (R01GM122845) and the National Institute on Aging of the National Institute of ...
doi:10.1093/bioinformatics/btab580
pmid:34390577
pmcid:PMC8696111
fatcat:hdsqiwhudnfrbjz2dma6tk7gh4
On Improving Adversarial Transferability of Vision Transformers
[article]
2022
arXiv
pre-print
In particular, we observe that adversarial patterns found via conventional adversarial attacks show very low black-box transferability even for large ViT models. ...
This makes it interesting to study the adversarial feature space of ViT models and their transferability. ...
Exploring the adversarial space of such multiple discriminative pathways in a self-ensemble generates highly transferable adversarial examples, as we show next. ...
arXiv:2106.04169v3
fatcat:y76kpnjwunhy3bfpfv5365abnm
Deep Neural Network Ensembles against Deception: Ensemble Diversity, Accuracy and Robustness
[article]
2019
arXiv
pre-print
We then describe a set of ensemble diversity measures, a suite of algorithms for creating diversity ensembles and for performing ensemble consensus (voted or learned) for generating high accuracy ensemble ...
Another attractive property of diversity optimized ensemble learning is its robustness against deception: an adversarial perturbation attack can mislead one DNN model to misclassify but may not fool other ...
Then the adversary can generate adversarial examples over the substitute of the target model, and exploit adversarial transferability to successfully attack the target model. ...
arXiv:1908.11091v1
fatcat:oo7wgjsupbhdno577qirsosbr4
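The "voted consensus" mentioned above can be sketched in a few lines: each ensemble member predicts a label and the ensemble returns the majority, so a diverse ensemble can out-vote the one member an adversarial example fools (names are illustrative):

```python
# Majority-vote ensemble consensus sketch.
from collections import Counter

def voted_consensus(models, x):
    votes = [m(x) for m in models]
    return Counter(votes).most_common(1)[0][0]
```

The paper's point is that this only helps when the members are deliberately diversified; identically trained members tend to be fooled together.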
Deep Visual Domain Adaptation
[article]
2020
arXiv
pre-print
Domain adaptation (DA) aims at improving the performance of a model on target domains by transferring the knowledge contained in different but related source domains. ...
The aim of this paper, therefore, is to give a comprehensive overview of deep domain adaptation methods for computer vision applications. ...
For example, [81] - [83] train the network to synthesize target-like and/or source-like images (see Figure 6 (Right)), in general relying on Generative Adversarial Networks (GANs) [38] , where ...
arXiv:2012.14176v1
fatcat:pawx66qxdrbc7azqokmiqq37gy
Deep Ladder-Suppression Network for Unsupervised Domain Adaptation
2021
IEEE Transactions on Cybernetics
Unsupervised domain adaptation (UDA) aims at learning a classifier for an unlabeled target domain by transferring knowledge from a labeled source domain with a related but different distribution. ...
Notably, the proposed DLSN can be used as a standard module to be integrated with various existing UDA frameworks to further boost performance. ...
Many adversarial DA methods are extensions of the representative DANN, such as moving semantic transfer network (MSTN) [45] , transferable adversarial training (TAT) [46] , and so on. ...
doi:10.1109/tcyb.2021.3065247
pmid:33784633
fatcat:sma2ze2edbbshd63wkvvrdicfu
Showing results 1 — 15 out of 3,799 results