Enhancing Cross-task Transferability of Adversarial Examples with Dispersion Reduction
[article]
2019
arXiv
pre-print
While great effort has been devoted to the transferability of adversarial examples, surprisingly little attention has been paid to its impact on real-world deep learning deployment. ...
In this paper, we investigate the transferability of adversarial examples across a wide range of real-world computer vision tasks, including image classification, explicit content detection, optical character ...
The results demonstrate the better cross-task transferability of the dispersion reduction attack.
Results. ...
arXiv:1905.03333v1
fatcat:hzg5agpi4fae7alxjbeqpr2eki
Enhancing Cross-task Black-Box Transferability of Adversarial Examples with Dispersion Reduction
[article]
2019
arXiv
pre-print
In this paper, we investigate the transferability of adversarial examples across a wide range of real-world computer vision tasks, including image classification, object detection, semantic segmentation ...
attacks by degrading the performance of multiple CV tasks by a large margin with only modest perturbations (ℓ∞ = 16). ...
Discussion and Conclusion In this paper, we propose a Dispersion Reduction (DR) attack to improve the cross-task transferability of adversarial examples. ...
arXiv:1911.11616v1
fatcat:ycf6noepwvh7nfrsustwxfjhuy
Enhancing Cross-Task Black-Box Transferability of Adversarial Examples With Dispersion Reduction
2020
2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
We investigate the transferability of adversarial examples across a wide range of real-world computer vision tasks, including image classification, object detection, semantic segmentation, explicit content ...
Neural networks are known to be vulnerable to carefully crafted adversarial examples, and these malicious samples often transfer, i.e., they remain adversarial even against other models. ...
Discussion and Conclusion We have proposed a Dispersion Reduction (DR) attack to improve the cross-task transferability of adversarial examples. ...
doi:10.1109/cvpr42600.2020.00102
dblp:conf/cvpr/LuJWLCCV20
fatcat:ml4pf5234jbgtdydhccze57nam
Multitask adversarial attack with dispersion amplification
2021
EURASIP Journal on Information Security
To attack such a system, an adversarial example has to pass through many distinct networks at once, which is the major challenge addressed by this paper. ...
The main reason is that real-world machine learning systems, such as content filters or face detectors, often consist of multiple neural networks, each performing an individual task. ...
[4] that describes the dispersion reduction technique to enhance the cross-task transferability of adversarial attacks. Our research expands this method further. ...
doi:10.1186/s13635-021-00124-3
fatcat:damhnwplznglngm7w4kusxcd4u
Beyond ImageNet Attack: Towards Crafting Adversarial Examples for Black-box Domains
[article]
2022
arXiv
pre-print
Adversarial examples have posed a severe threat to deep neural networks due to their transferable nature. ...
In this paper, with only the knowledge of the ImageNet domain, we propose a Beyond ImageNet Attack (BIA) to investigate the transferability towards black-box domains (unknown classification tasks). ...
Therefore, this robust feature representation can serve as a domain-agnostic attention (DA) to enhance the cross-domain transferability of adversarial examples. ...
arXiv:2201.11528v4
fatcat:hltin4zbenc65glymqd7fmodce
Optimal Transport with Dimensionality Reduction for Domain Adaptation
2020
Symmetry
In the first stage, we apply the dimensionality reduction with intradomain variant maximization but source intraclass compactness minimization, to separate data samples as much as possible and enhance ...
To address this problem, this paper proposes a two-stage feature-based adaptation approach, referred to as optimal transport with dimensionality reduction (OTDR). ...
adaptation network (MRAN) [48], transferable adversarial training (TAT) [49], learning explicitly transferable representations (LETR) [50], hybrid adversarial network (HAN) [51]). ...
doi:10.3390/sym12121994
fatcat:25hwkwcfsradhgnuobpbjpqj2e
Resolution enhancement and realistic speckle recovery with generative adversarial modeling of micro-optical coherence tomography
[article]
2020
arXiv
pre-print
Accuracy of resolution enhancement compared to ground truth was quantified with human perceptual accuracy tests performed by an OCT expert. ...
transferability. ...
Disclosures The authors declare no conflicts of interest. ...
arXiv:2003.06035v2
fatcat:jbrx5ldk3bbdrkv5dswt5izrvi
Transfer Learning for EEG-Based Brain-Computer Interfaces: A Review of Progress Made Since 2016
[article]
2020
arXiv
pre-print
For each paradigm/application, we group the TL approaches into cross-subject/session, cross-device, and cross-task settings and review them separately. ...
Transfer learning (TL), which utilizes data or knowledge from similar or relevant subjects/sessions/devices/tasks to facilitate learning for a new subject/session/device/task, is frequently used to reduce ...
cross-device and cross-task transfers, aBCIs, regression problems and adversarial attacks. ...
arXiv:2004.06286v4
fatcat:e32dqag5pvha7mzabrwead2hni
DiRA: Discriminative, Restorative, and Adversarial Learning for Self-supervised Medical Image Analysis
[article]
2022
arXiv
pre-print
only image-level annotation; and (4) enhances state-of-the-art restorative approaches, revealing that DiRA is a general mechanism for united representation learning. ...
increases robustness in small data regimes, reducing annotation cost across multiple medical imaging applications; (3) learns fine-grained semantic representation, facilitating accurate lesion localization with ...
Acknowledgments: With the help of Zongwei Zhou, Zuwei Guo started implementing the earlier ideas behind "United & Unified", which has branched out into DiRA. ...
arXiv:2204.10437v1
fatcat:pb6rumfdgzfnxhwrqbz76myozu
Demystifying the Transferability of Adversarial Attacks in Computer Networks
[article]
2022
arXiv
pre-print
In this paper, we provide the first comprehensive study which assesses the robustness of CNN-based models for computer networks against adversarial transferability. ...
Recent studies demonstrated that adversarial attacks against such models can maintain their effectiveness even when used on models other than the one targeted by the attacker. ...
ACKNOWLEDGMENTS This work is funded by the University of Padua, Italy, under the STARS Grants program (Acronym and title of the project: LIGHTHOUSE: Securing the Transition Toward the Future Internet). ...
arXiv:2110.04488v3
fatcat:ppfeznlqzfhnddin3fctp2b35a
Advances in adversarial attacks and defenses in computer vision: A survey
[article]
2021
arXiv
pre-print
Its ability to accurately solve complex problems is employed in vision research to learn deep neural models for a variety of tasks, including security critical applications. ...
In [2], we reviewed the contributions made by the computer vision community in adversarial attacks on deep learning (and their defenses) until the advent of year 2018. ...
An example of an adversarial attack on Graph Matching can be found in [264]. There have also been enhancements and variants of patch attacks for multiple vision tasks. For example, Yang et al. ...
arXiv:2108.00401v2
fatcat:23gw74oj6bblnpbpeacpg3hq5y
Robust Ensembling Network for Unsupervised Domain Adaptation
[article]
2021
arXiv
pre-print
Extensive experimental results on several UDA datasets have demonstrated the effectiveness of our model by comparing with other state-of-the-art UDA algorithms. ...
Although adversarial learning is very effective, it still leads to the instability of the network and the drawbacks of confusing category information. ...
The key of these methods is to enhance the training model with unlabeled data and cluster data points of different labels with perturbations. ...
arXiv:2108.09473v1
fatcat:a3p5u2lqirdj7nlosd34uzt5xi
With Great Training Comes Great Vulnerability: Practical Attacks against Transfer Learning
2018
USENIX Security Symposium
Transfer learning is a powerful approach that allows users to quickly build accurate deep-learning (Student) models by "learning" from centralized (Teacher) models pretrained with large datasets, e.g. ...
We hypothesize that the centralization of model training increases their vulnerability to misclassification attacks leveraging knowledge of publicly accessible Teacher models. ...
Can white-box attacks on Teacher models transfer to Student models? Prior work identified the transferability of adversarial samples across different models for the same task [38]. ...
dblp:conf/uss/WangYVZZ18
fatcat:n6qrqn5vevfhhem5utmrpsv3s4
Learning Semantic Representations for Unsupervised Domain Adaptation
2018
International Conference on Machine Learning
It is important to transfer the knowledge from label-rich source domain to unlabeled target domain due to the expensive cost of manual labeling efforts. ...
semantic information contained in samples, e.g., features of backpacks in target domain might be mapped near features of cars in source domain. ...
2016, the National Natural Science Foundation of China (No. 61722214), and the Program for Guangdong Introducing Innovative and Entrepreneurial Teams (No. 2016ZT06D211). ...
dblp:conf/icml/XieZCC18
fatcat:mizrllhtvzcu3flpvvccnzcype
Neural Networks Based Domain Adaptation in Spectroscopic Sky Surveys
2020
Zenodo
We choose to experiment with four neural models for domain adaptation: Deep Domain Confusion, Deep Correlation Alignment, Domain-Adversarial Network and Deep Reconstruction-Classification Network. ...
Using dimensionality reduction, statistics of the selected methods and misclassifications, we show that the domain adaptation methods are not robust enough to be applied to the complex and dirty astronomical ...
Dimensionality Reduction. In this section, we investigate the structure of the joint data space of the source and target datasets with three dimensionality reduction methods: principal ...
doi:10.5281/zenodo.3685516
fatcat:dz4zinv2cjd4rggpw3iw7y6vhi
Showing results 1 — 15 out of 4,636 results