
Enhancing Cross-task Transferability of Adversarial Examples with Dispersion Reduction [article]

Yunhan Jia, Yantao Lu, Senem Velipasalar, Zhenyu Zhong, Tao Wei
2019 arXiv   pre-print
While great effort has been devoted to the transferability of adversarial examples, surprisingly little attention has been paid to its impact on real-world deep learning deployments.  ...  In this paper, we investigate the transferability of adversarial examples across a wide range of real-world computer vision tasks, including image classification, explicit content detection, optical character  ...  The results demonstrate the better cross-task transferability of the dispersion reduction attack.  ... 
arXiv:1905.03333v1 fatcat:hzg5agpi4fae7alxjbeqpr2eki

Enhancing Cross-task Black-Box Transferability of Adversarial Examples with Dispersion Reduction [article]

Yantao Lu, Yunhan Jia, Jianyu Wang, Bai Li, Weiheng Chai, Lawrence Carin, Senem Velipasalar
2019 arXiv   pre-print
In this paper, we investigate the transferability of adversarial examples across a wide range of real-world computer vision tasks, including image classification, object detection, semantic segmentation  ...  attacks by degrading the performance of multiple CV tasks by a large margin with only modest perturbations (ℓ∞ = 16).  ...  In this paper, we propose a Dispersion Reduction (DR) attack to improve the cross-task transferability of adversarial examples.  ... 
arXiv:1911.11616v1 fatcat:ycf6noepwvh7nfrsustwxfjhuy

Enhancing Cross-Task Black-Box Transferability of Adversarial Examples With Dispersion Reduction

Yantao Lu, Yunhan Jia, Jianyu Wang, Bai Li, Weiheng Chai, Lawrence Carin, Senem Velipasalar
2020 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
We investigate the transferability of adversarial examples across a wide range of real-world computer vision tasks, including image classification, object detection, semantic segmentation, explicit content  ...  Neural networks are known to be vulnerable to carefully crafted adversarial examples, and these malicious samples often transfer, i.e., they remain adversarial even against other models.  ...  We have proposed a Dispersion Reduction (DR) attack to improve the cross-task transferability of adversarial examples.  ... 
doi:10.1109/cvpr42600.2020.00102 dblp:conf/cvpr/LuJWLCCV20 fatcat:ml4pf5234jbgtdydhccze57nam
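
The dispersion reduction entries above describe crafting a perturbation that shrinks the "dispersion" of an intermediate feature map of a publicly available surrogate model, so that the damage carries over to many downstream tasks at once. A minimal PGD-style sketch of that idea follows; the torchvision VGG-16 surrogate, the layer cut point, the step size, and the iteration count are illustrative assumptions, with the ℓ∞ budget of 16 (here 16/255 for images scaled to [0, 1]) taken from the abstract above.

```python
# Hedged sketch of a dispersion-reduction-style attack: iteratively perturb the
# input so the standard deviation ("dispersion") of an intermediate feature map
# of a surrogate classifier shrinks, while staying inside an l_inf ball.
# Surrogate model, layer index, step size and iteration count are assumptions.
import torch
import torchvision.models as models

def dispersion_reduction_attack(x, eps=16 / 255, alpha=2 / 255, steps=40, layer_idx=14):
    features = models.vgg16(weights="IMAGENET1K_V1").features.eval()
    feat_extractor = features[:layer_idx + 1]         # truncate at an intermediate conv layer
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        dispersion = feat_extractor(x_adv).std()       # "dispersion" of the feature map
        grad = torch.autograd.grad(dispersion, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()        # descend: reduce dispersion
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # project back into the l_inf ball
            x_adv = x_adv.clamp(0, 1)
        x_adv = x_adv.detach()
    return x_adv

# Usage on a stand-in image batch in [0, 1]:
x = torch.rand(1, 3, 224, 224)
x_adv = dispersion_reduction_attack(x)
```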

Multitask adversarial attack with dispersion amplification

Pavlo Haleta, Dmytro Likhomanov, Oleksandra Sokol
2021 EURASIP Journal on Information Security  
To attack such a system, an adversarial example has to pass through many distinct networks at once, which is the major challenge addressed by this paper.  ...  The main reason is that real-world machine learning systems, such as content filters or face detectors, often consist of multiple neural networks, each performing an individual task.  ...  [4], which describes the dispersion reduction technique to enhance the cross-task transferability of adversarial attacks. Our research extends this method further.  ... 
doi:10.1186/s13635-021-00124-3 fatcat:damhnwplznglngm7w4kusxcd4u
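
The abstract above frames the attack as one perturbation that must defeat several distinct networks at once. One way to sketch that idea (not necessarily the authors' dispersion-amplification formulation) is to aggregate a dispersion-style objective over several surrogate feature extractors and take a joint signed-gradient step; the two torchvision surrogates, the uniform weighting, and the schedule below are assumptions.

```python
# Hedged sketch of a multi-network attack: a single perturbation is optimized
# against the sum of dispersion-style losses from several surrogate feature
# extractors.  Model choices, layer cut points and uniform weighting are
# illustrative assumptions, not the paper's exact formulation.
import torch
import torch.nn as nn
import torchvision.models as models

def multi_surrogate_attack(x, eps=16 / 255, alpha=2 / 255, steps=40):
    vgg_feats = models.vgg16(weights="IMAGENET1K_V1").features[:15].eval()
    resnet = models.resnet18(weights="IMAGENET1K_V1").eval()
    res_feats = nn.Sequential(*list(resnet.children())[:6])   # conv1 .. layer2
    surrogates = [vgg_feats, res_feats]

    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = sum(s(x_adv).std() for s in surrogates)         # joint dispersion objective
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)
            x_adv = x_adv.clamp(0, 1)
        x_adv = x_adv.detach()
    return x_adv
```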

Beyond ImageNet Attack: Towards Crafting Adversarial Examples for Black-box Domains [article]

Qilong Zhang, Xiaodan Li, Yuefeng Chen, Jingkuan Song, Lianli Gao, Yuan He, Hui Xue
2022 arXiv   pre-print
Adversarial examples have posed a severe threat to deep neural networks due to their transferable nature.  ...  In this paper, with only the knowledge of the ImageNet domain, we propose a Beyond ImageNet Attack (BIA) to investigate the transferability towards black-box domains (unknown classification tasks).  ...  Therefore, this robust feature representation can serve as a domain-agnostic attention (DA) to enhance the cross-domain transferability of adversarial examples.  ... 
arXiv:2201.11528v4 fatcat:hltin4zbenc65glymqd7fmodce
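
The snippet above mentions using a robust feature representation as a domain-agnostic attention to guide the attack. The sketch below shows one plausible reading of that sentence: weight a feature-disruption loss by the spatial magnitude of an ImageNet surrogate's intermediate features. The surrogate, the layer, and the loss form are all assumptions, and the rest of the authors' pipeline is not reproduced here.

```python
# Hedged sketch of an attention-weighted feature-disruption objective: the
# per-pixel magnitude of a clean image's intermediate features acts as a
# "domain-agnostic attention" map that weights how strongly the adversarial
# features are pushed away.  Surrogate, layer and loss form are assumptions.
import torch
import torchvision.models as models

feat_extractor = models.vgg16(weights="IMAGENET1K_V1").features[:15].eval()

def attention_weighted_feature_loss(x_clean, x_adv):
    f_clean = feat_extractor(x_clean).detach()
    f_adv = feat_extractor(x_adv)
    attn = f_clean.abs().mean(dim=1, keepdim=True)               # spatial saliency
    attn = attn / (attn.sum(dim=(2, 3), keepdim=True) + 1e-8)    # normalize per image
    # Maximizing this (e.g. by gradient ascent on x_adv) disrupts the features
    # most strongly where the clean image was most salient to the surrogate.
    return (attn * (f_adv - f_clean) ** 2).sum()
```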

Optimal Transport with Dimensionality Reduction for Domain Adaptation

Ping Li, Zhiwei Ni, Xuhui Zhu, Juan Song, Wenying Wu
2020 Symmetry  
In the first stage, we apply dimensionality reduction with intra-domain variance maximization and source intra-class compactness minimization, to separate data samples as much as possible and enhance  ...  To address this problem, this paper proposes a two-stage feature-based adaptation approach, referred to as optimal transport with dimensionality reduction (OTDR).  ...  adaptation network (MRAN) [48], transferable adversarial training (TAT) [49], learning explicitly transferable representations (LETR) [50], hybrid adversarial network (HAN) [51]).  ... 
doi:10.3390/sym12121994 fatcat:25hwkwcfsradhgnuobpbjpqj2e
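
The abstract describes a two-stage pipeline: reduce dimensionality first, then align source and target with optimal transport. A minimal sketch using scikit-learn and the POT (Python Optimal Transport) library follows; plain PCA stands in for the paper's variance/compactness objective, and the component count, regularization strength, and synthetic data are assumptions.

```python
# Hedged two-stage sketch: (1) project source and target features into a shared
# low-dimensional space, (2) transport source samples onto the target domain
# with entropic optimal transport, then train a classifier on the transported
# source features.  Library choices and hyperparameters are illustrative.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
import ot  # POT: Python Optimal Transport

rng = np.random.default_rng(0)
Xs, ys = rng.normal(0.0, 1.0, (200, 50)), rng.integers(0, 3, 200)   # labeled source
Xt = rng.normal(0.5, 1.2, (180, 50))                                # unlabeled target

# Stage 1: joint dimensionality reduction (plain PCA as a stand-in)
pca = PCA(n_components=10).fit(np.vstack([Xs, Xt]))
Zs, Zt = pca.transform(Xs), pca.transform(Xt)

# Stage 2: entropic optimal transport from source to target
transport = ot.da.SinkhornTransport(reg_e=1.0)
transport.fit(Xs=Zs, ys=ys, Xt=Zt)
Zs_mapped = transport.transform(Xs=Zs)

# Train on transported source features, predict on the target domain
clf = LogisticRegression(max_iter=1000).fit(Zs_mapped, ys)
target_pred = clf.predict(Zt)
```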

Resolution enhancement and realistic speckle recovery with generative adversarial modeling of micro-optical coherence tomography [article]

Kaicheng Liang, Xinyu Liu, Si Chen, Jun Xie, Wei Qing Lee, Linbo Liu, Hwee Kuan Lee
2020 arXiv   pre-print
Accuracy of resolution enhancement compared to ground truth was quantified with human perceptual accuracy tests performed by an OCT expert.  ...  transferability.  ...  Disclosures: The authors declare no conflicts of interest.  ... 
arXiv:2003.06035v2 fatcat:jbrx5ldk3bbdrkv5dswt5izrvi

Transfer Learning for EEG-Based Brain-Computer Interfaces: A Review of Progress Made Since 2016 [article]

Dongrui Wu and Yifan Xu and Bao-Liang Lu
2020 arXiv   pre-print
For each paradigm/application, we group the TL approaches into cross-subject/session, cross-device, and cross-task settings and review them separately.  ...  Transfer learning (TL), which utilizes data or knowledge from similar or relevant subjects/sessions/devices/tasks to facilitate learning for a new subject/session/device/task, is frequently used to reduce  ...  cross-device and cross-task transfers, aBCIs, regression problems and adversarial attacks.  ... 
arXiv:2004.06286v4 fatcat:e32dqag5pvha7mzabrwead2hni

DiRA: Discriminative, Restorative, and Adversarial Learning for Self-supervised Medical Image Analysis [article]

Fatemeh Haghighi, Mohammad Reza Hosseinzadeh Taher, Michael B. Gotway, Jianming Liang
2022 arXiv   pre-print
only image-level annotation; and (4) enhances state-of-the-art restorative approaches, revealing that DiRA is a general mechanism for united representation learning.  ...  increases robustness in small data regimes, reducing annotation cost across multiple medical imaging applications; (3) learns fine-grained semantic representation, facilitating accurate lesion localization with  ...  Acknowledgments: With the help of Zongwei Zhou, Zuwei Guo started implementing the earlier ideas behind "United & Unified", which has branched out into DiRA.  ... 
arXiv:2204.10437v1 fatcat:pb6rumfdgzfnxhwrqbz76myozu
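
The title names three ingredients (discriminative, restorative, and adversarial learning) that can be read as three loss terms trained jointly on one encoder. The sketch below combines a simple instance-discrimination (contrastive) term, a reconstruction term, and an adversarial term with assumed uniform weights; the module interfaces and the weighting are illustrative assumptions rather than the paper's exact design.

```python
# Hedged sketch of combining the three DiRA-style ingredients as loss terms:
# a discriminative (contrastive) term on two views, a restorative term that
# reconstructs the original image, and an adversarial term from a discriminator
# judging the restored image.  Modules and weights are illustrative assumptions.
import torch
import torch.nn.functional as F

def dira_style_loss(z1, z2, x_restored, x_target, disc_logits_on_restored,
                    w_dis=1.0, w_res=1.0, w_adv=1.0, temperature=0.1):
    # Discriminative: pull two augmented views of the same image together (InfoNCE)
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature
    labels = torch.arange(z1.size(0), device=z1.device)
    l_dis = F.cross_entropy(logits, labels)
    # Restorative: reconstruct the original image from a distorted view
    l_res = F.mse_loss(x_restored, x_target)
    # Adversarial: fool a discriminator into scoring the restored image as real
    l_adv = F.binary_cross_entropy_with_logits(
        disc_logits_on_restored, torch.ones_like(disc_logits_on_restored))
    return w_dis * l_dis + w_res * l_res + w_adv * l_adv
```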

Demystifying the Transferability of Adversarial Attacks in Computer Networks [article]

Ehsan Nowroozi, Yassine Mekdad, Mohammad Hajian Berenjestanaki, Mauro Conti, Abdeslam EL Fergougui
2022 arXiv   pre-print
In this paper, we provide the first comprehensive study which assesses the robustness of CNN-based models for computer networks against adversarial transferability.  ...  Recent studies demonstrated that adversarial attacks against such models can maintain their effectiveness even when used on models other than the one targeted by the attacker.  ...  Acknowledgments: This work is funded by the University of Padua, Italy, under the STARS Grants program (Acronym and title of the project: LIGHTHOUSE: Securing the Transition Toward the Future Internet).  ... 
arXiv:2110.04488v3 fatcat:ppfeznlqzfhnddin3fctp2b35a
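
The study above measures how well attacks crafted on one model carry over to another. A minimal sketch of that evaluation loop follows: craft adversarial examples on a surrogate and count how often a separate target model is fooled by the same inputs. The FGSM attack and the two torchvision classifiers are stand-ins for illustration, not the paper's network-traffic CNNs.

```python
# Hedged sketch of measuring cross-model transferability: adversarial examples
# are crafted against a surrogate model and then replayed against a different
# target model.  Attack, models and data are illustrative assumptions.
import torch
import torch.nn.functional as F
import torchvision.models as models

def fgsm(model, x, y, eps=8 / 255):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()

surrogate = models.resnet18(weights="IMAGENET1K_V1").eval()
target = models.densenet121(weights="IMAGENET1K_V1").eval()

x = torch.rand(8, 3, 224, 224)                  # stand-in batch in [0, 1]
y = surrogate(x).argmax(dim=1)                  # surrogate's own labels as reference
x_adv = fgsm(surrogate, x, y)

transfer_rate = (target(x_adv).argmax(dim=1) != y).float().mean()
print(f"fraction of adversarial examples that transfer: {transfer_rate:.2f}")
```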

Advances in adversarial attacks and defenses in computer vision: A survey [article]

Naveed Akhtar, Ajmal Mian, Navid Kardan, Mubarak Shah
2021 arXiv   pre-print
Its ability to accurately solve complex problems is employed in vision research to learn deep neural models for a variety of tasks, including security-critical applications.  ...  In [2], we reviewed the contributions made by the computer vision community in adversarial attacks on deep learning (and their defenses) up to the year 2018.  ...  An example of an adversarial attack on graph matching can be found in [264]. There have also been enhancements and variants of patch attacks for multiple vision tasks. For example, Yang et al.  ... 
arXiv:2108.00401v2 fatcat:23gw74oj6bblnpbpeacpg3hq5y

Robust Ensembling Network for Unsupervised Domain Adaptation [article]

Han Sun, Lei Lin, Ningzhong Liu, Huiyu Zhou
2021 arXiv   pre-print
Extensive experimental results on several UDA datasets have demonstrated the effectiveness of our model in comparison with other state-of-the-art UDA algorithms.  ...  Although adversarial learning is very effective, it still leads to network instability and the drawback of confusing category information.  ...  The key to these methods is to enhance the training model with unlabeled data and to cluster data points of different labels using perturbations.  ... 
arXiv:2108.09473v1 fatcat:a3p5u2lqirdj7nlosd34uzt5xi

With Great Training Comes Great Vulnerability: Practical Attacks against Transfer Learning

Bolun Wang, Yuanshun Yao, Bimal Viswanath, Haitao Zheng, Ben Y. Zhao
2018 USENIX Security Symposium  
Transfer learning is a powerful approach that allows users to quickly build accurate deep-learning (Student) models by "learning" from centralized (Teacher) models pretrained with large datasets, e.g.  ...  We hypothesize that the centralization of model training increases their vulnerability to misclassification attacks leveraging knowledge of publicly accessible Teacher models.  ...  Can white-box attacks on Teacher models transfer to Student models? Prior work identified the transferability of adversarial samples across different models for the same task [38].  ... 
dblp:conf/uss/WangYVZZ18 fatcat:n6qrqn5vevfhhem5utmrpsv3s4
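
The abstract hypothesizes that publicly available Teacher models make downstream Student models easier to attack. One natural way such an attack can be instantiated, sketched below under stated assumptions, is to perturb a source image so that its representation at an early Teacher layer mimics that of a chosen target image; if a Student reuses (freezes) those layers, the mimicry can carry over. The layer cut point, optimizer, step count, and budget here are assumptions, not the paper's exact settings.

```python
# Hedged sketch of internal-representation mimicry against a public Teacher
# model: the source image is perturbed so its features at an early layer match
# those of a target image.  Layer, steps and budget are illustrative assumptions.
import torch
import torch.nn.functional as F
import torchvision.models as models

teacher_feats = models.vgg16(weights="IMAGENET1K_V1").features[:17].eval()

def mimic_internal_representation(x_src, x_tgt, eps=8 / 255, alpha=1 / 255, steps=100):
    target_repr = teacher_feats(x_tgt).detach()
    x_adv = x_src.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.mse_loss(teacher_feats(x_adv), target_repr)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()                # move features toward target
            x_adv = x_src + (x_adv - x_src).clamp(-eps, eps)   # stay within the l_inf budget
            x_adv = x_adv.clamp(0, 1)
        x_adv = x_adv.detach()
    return x_adv
```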

Learning Semantic Representations for Unsupervised Domain Adaptation

Shaoan Xie, Zibin Zheng, Liang Chen, Chuan Chen
2018 International Conference on Machine Learning  
It is important to transfer the knowledge from a label-rich source domain to an unlabeled target domain due to the expensive cost of manual labeling efforts.  ...  semantic information contained in samples, e.g., features of backpacks in the target domain might be mapped near features of cars in the source domain.  ...  2016, the National Natural Science Foundation of China (No. 61722214), and the Program for Guangdong Introducing Innovative and Entrepreneurial Teams (No. 2016ZT06D211).  ... 
dblp:conf/icml/XieZCC18 fatcat:mizrllhtvzcu3flpvvccnzcype
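
The abstract's motivating failure case is that target-domain backpacks can end up aligned with source-domain cars when only marginal feature distributions are matched. One way to encode the missing class-level constraint, sketched below under stated assumptions, is to pull per-class feature centroids of the source and the (pseudo-labeled) target toward each other; the pseudo-labeling and the plain squared distance are illustrative choices, not necessarily the paper's exact objective.

```python
# Hedged sketch of class-level (semantic) alignment: for every class, pull the
# centroid of source features toward the centroid of pseudo-labeled target
# features, so same-class samples end up near each other across domains.
import torch

def centroid_alignment_loss(f_src, y_src, f_tgt, y_tgt_pseudo, num_classes):
    loss = f_src.new_zeros(())
    for c in range(num_classes):
        src_c = f_src[y_src == c]
        tgt_c = f_tgt[y_tgt_pseudo == c]
        if len(src_c) == 0 or len(tgt_c) == 0:
            continue  # class not present in this batch
        loss = loss + ((src_c.mean(0) - tgt_c.mean(0)) ** 2).sum()
    return loss / num_classes
```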

Neural Networks Based Domain Adaptation in Spectroscopic Sky Surveys

Ondřej Podsztavek, Petr Škoda
2020 Zenodo  
We choose to experiment with four neural models for domain adaptation: Deep Domain Confusion, Deep Correlation Alignment, Domain-Adversarial Network and Deep Reconstruction-Classification Network.  ...  Using dimensionality reduction, statistics of the selected methods and misclassifications, we show that the domain adaptation methods are not robust enough to be applied to the complex and dirty astronomical  ...  In this section, we investigate the structure of the joint data space of the source and target datasets with three dimensionality reduction methods: principal  ... 
doi:10.5281/zenodo.3685516 fatcat:dz4zinv2cjd4rggpw3iw7y6vhi
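
The snippet above describes inspecting the joint source/target data space with dimensionality reduction, starting with principal component analysis. A minimal sketch of that kind of check follows; the synthetic arrays stand in for real survey spectra, and the 2-D PCA projection is an illustrative choice.

```python
# Hedged sketch of a joint-space inspection: project source and target samples
# into a shared 2-D PCA space and colour by domain to see whether they overlap.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X_source = rng.normal(0.0, 1.0, (500, 128))   # stand-in for labelled survey spectra
X_target = rng.normal(0.3, 1.3, (500, 128))   # stand-in for unlabelled survey spectra

Z = PCA(n_components=2).fit_transform(np.vstack([X_source, X_target]))
plt.scatter(Z[:500, 0], Z[:500, 1], s=5, label="source")
plt.scatter(Z[500:, 0], Z[500:, 1], s=5, label="target")
plt.legend(); plt.xlabel("PC 1"); plt.ylabel("PC 2")
plt.show()
```
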
Showing results 1 — 15 out of 4,636 results