727 Hits in 6.5 sec

Defense Against Adversarial Images using Web-Scale Nearest-Neighbor Search [article]

Abhimanyu Dubey, Laurens van der Maaten, Zeki Yalniz, Yixuan Li, Dhruv Mahajan
2019 arXiv   pre-print
We study such defense mechanisms, which approximate the projection onto the unknown image manifold by a nearest-neighbor search against a web-scale image database containing tens of billions of images.  ...  We also propose two novel attack methods to break nearest-neighbor defenses, and demonstrate conditions under which nearest-neighbor defense fails.  ...  Defense-Aware Attacks We develop two defense-aware attacks in which the adversary uses nearest-neighbor search on a web-scale image database to simulate the defense.  ... 
arXiv:1903.01612v2 fatcat:mj5nbqkawjeg3k6v4255joprzm
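The defense described in this entry approximates a projection onto the image manifold by retrieving nearest neighbors from a large clean database and combining their predictions. A minimal sketch of that averaging idea, assuming toy feature vectors and stored per-image class probabilities (all names and data here are illustrative, not the authors' implementation):

```python
import numpy as np

def knn_defense_predict(x_feat, db_feats, db_probs, k=5):
    """Classify via the database instead of the (possibly adversarial)
    input directly: find the k nearest database entries in feature space
    and average their class-probability vectors."""
    d = np.linalg.norm(db_feats - x_feat, axis=1)   # distance to every entry
    nn = np.argsort(d)[:k]                          # indices of k closest
    return db_probs[nn].mean(axis=0)                # averaged prediction

# Toy usage: a 2-class "database" of 100 random 8-d feature vectors.
rng = np.random.default_rng(0)
db_feats = rng.normal(size=(100, 8))
db_probs = rng.dirichlet(np.ones(2), size=100)      # rows sum to 1
p = knn_defense_predict(rng.normal(size=8), db_feats, db_probs, k=5)
```

In the paper's setting the database holds tens of billions of images and retrieval uses an approximate index; the brute-force distance computation above only illustrates the prediction-averaging step.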

Data Poisoning Won't Save You From Facial Recognition [article]

Evani Radiya-Dixit, Sanghyun Hong, Nicholas Carlini, Florian Tramèr
2022 arXiv   pre-print
Data poisoning has been proposed as a compelling defense against facial recognition models trained on Web-scraped pictures.  ...  We evaluate two systems for poisoning attacks against large-scale facial recognition, Fawkes (500'000+ downloads) and LowKey.  ...  We also reproduce the results for nearest neighbor search for comparison.  ... 
arXiv:2106.14851v2 fatcat:bgor6b6tnnewvhpzbwjuu5hhba

FoggySight: A Scheme for Facial Lookup Privacy [article]

Ivan Evtimov, Pascal Sturmfels, Tadayoshi Kohno
2020 arXiv   pre-print
Searches in these databases are now being offered as a service to law enforcement and others and carry a multitude of privacy risks for social media users.  ...  In parallel, private companies have been scraping social media and other public websites that tie photos to identities and have built up large databases of labeled face images.  ...  MacArthur Foundation, Microsoft, the Pierre and Pamela Omidyar Fund at the Silicon Valley Community Foundation; it was also supported by the US National Science Foundation (Award 156525).  ... 
arXiv:2012.08588v1 fatcat:wcz2rxwsefak5ocr5e7bnrbisi

Adversarial Examples in Modern Machine Learning: A Review [article]

Rey Reza Wiyatno, Anqi Xu, Ousmane Dia, Archy de Berker
2019 arXiv   pre-print
We explore a variety of adversarial attack methods that apply to image-space content, real world adversarial attacks, adversarial defenses, and the transferability property of adversarial examples.  ...  We also discuss strengths and weaknesses of various methods of adversarial attack and defense.  ...  Web-Scale Nearest-Neighbor Search Assuming adversarial perturbations push an image away from its manifold, Dubey et al.  ... 
arXiv:1911.05268v2 fatcat:majzak4sqbhcpeahghh6sm3dwq

A Study of Adversarial Attacks and Detection on Deep Learning-Based Plant Disease Identification

Zhirui Luo, Qingqing Li, Jun Zheng
2021 Applied Sciences  
Our work will serve as a basis for developing more robust DNN models for plant disease identification and guiding the defense against adversarial attacks.  ...  We also find that adversarial attacks can be effectively defended by using adversarial sample detection with an appropriate choice of features.  ...  x and its i-th nearest neighbor, r_k(x) is the distance between x and x's furthest neighbor among the k nearest neighbors.  ... 
doi:10.3390/app11041878 fatcat:kys42tp62zdspbuczjepkhibdy
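The snippet's distance notation describes k-nearest-neighbor distance features used for adversarial sample detection: r_i(x) is the distance from x to its i-th nearest training point, r_k(x) the distance to the furthest of the k neighbors. A hedged sketch of such ratio features, assuming a toy feature space (function and variable names are mine, not the paper's):

```python
import numpy as np

def knn_ratio_features(x, train_feats, k=5):
    """Compute the k smallest distances from x to the training set and
    normalize by the largest of them, r_k(x); the ratios r_i/r_k serve
    as features for an adversarial-sample detector."""
    d = np.sort(np.linalg.norm(train_feats - x, axis=1))[:k]
    r_k = d[-1]                 # distance to the furthest of the k neighbors
    return d / r_k              # ascending ratios; last entry is exactly 1.0

# Toy usage over 50 random 4-d training vectors.
rng = np.random.default_rng(1)
feats = knn_ratio_features(rng.normal(size=4), rng.normal(size=(50, 4)), k=5)
```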

GraCIAS: Grassmannian of Corrupted Images for Adversarial Security [article]

Ankita Shukla, Pavan Turaga, Saket Anand
2020 arXiv   pre-print
Input transformation based defense strategies fall short in defending against strong adversarial attacks.  ...  In this work, we propose a defense strategy that applies random image corruptions to the input image alone, constructs a self-correlation based subspace followed by a projection operation to suppress the  ...  Interestingly, they motivate the nearest neighbor search in an unrelated, web-scale dataset with billions of images, where the aforementioned approach of averaging the prediction probabilities is aimed  ... 
arXiv:2005.02936v2 fatcat:2ofcn2r23bavtcin2xsxkzphey


Chaoyun Zhang, Xavier Costa-Perez, Paul Patras
2020 Proceedings of the 2020 ACM SIGSAC Conference on Cloud Computing Security Workshop  
To the best of our knowledge, our work is the first to propose defenses against adversarial attacks targeting NIDS.  ...  Defending against such adversarial attacks is therefore of high importance, but requires to address daunting challenges.  ...  , and (iii) boundary search via a binary search approach.  ... 
doi:10.1145/3411495.3421359 fatcat:ispgcjohqvh6ji6faarl3vpkky
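This entry mentions boundary search via a binary search approach. A minimal sketch of that step, bisecting the segment between a clean and an adversarial input to locate the decision boundary, under an assumed toy decision function (names and the threshold are illustrative, not taken from the paper):

```python
import numpy as np

def boundary_bisect(is_adv, x_clean, x_adv, tol=1e-6):
    """Given a decision function is_adv (True on the adversarial side)
    and two inputs on opposite sides of the boundary, repeatedly halve
    the segment between them until it is shorter than tol."""
    lo, hi = x_clean, x_adv                  # invariant: is_adv(hi) is True
    while np.linalg.norm(hi - lo) > tol:
        mid = (lo + hi) / 2.0
        if is_adv(mid):
            hi = mid                         # boundary lies in [lo, mid]
        else:
            lo = mid                         # boundary lies in [mid, hi]
    return hi                                # adversarial point near boundary

# Toy decision rule: "adversarial" whenever the first coordinate exceeds 0.3.
x_b = boundary_bisect(lambda x: x[0] > 0.3, np.zeros(2), np.ones(2))
```

Because the loop only ever tightens the `hi` endpoint while its invariant holds, the returned point stays on the adversarial side but lies within `tol` of the boundary.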

Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses [article]

Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, Dawn Song, Aleksander Madry, Bo Li, Tom Goldstein
2021 arXiv   pre-print
As machine learning systems grow in scale, so do their training data requirements, forcing practitioners to automate and outsource the curation of training data in order to achieve state-of-the-art performance  ...  The goal of this work is to systematically categorize and discuss a wide range of dataset vulnerabilities and exploits, approaches for defending against these threats, and an array of open problems in  ...  Multiple contributors to this work were supported by the Defense Advanced Research Projects Agency (DARPA) GARD, QED4RML and D3M programs.  ... 
arXiv:2012.10544v4 fatcat:2tpz6l2dpbgrjcyf5yxxv3pvii

PRADA: Practical Black-Box Adversarial Attacks against Neural Ranking Models [article]

Chen Wu, Ruqing Zhang, Jiafeng Guo, Maarten de Rijke, Yixing Fan, Xueqi Cheng
2022 arXiv   pre-print
Experiments on two web search benchmark datasets show that PRADA can outperform existing attack strategies and successfully fool the NRM with small indiscernible perturbations of text.  ...  Adversarial attacks may become a new type of web spamming technique given our increased reliance on neural information retrieval models.  ...  We hope this study could provide useful clues for future research on adversarial ranking defense and help develop robust real-world search engines.  ... 
arXiv:2204.01321v2 fatcat:a5fvjxswk5chjfvf6cxnlzmpim

Deep Fingerprinting: Undermining Website Fingerprinting Defenses with Deep Learning [article]

Payap Sirinam, Mohsen Imani, Marc Juarez, Matthew Wright
2018 arXiv   pre-print
These findings highlight the need for effective defenses that protect against this new attack and that could be deployed in Tor.  ...  The DF attack attains over 98% accuracy on Tor traffic without defenses, better than all prior attacks, and it is also the only attack that is effective against WTF-PAD with over 90% accuracy.  ...  Previous WF attacks use a set of hand-crafted features to represent Tor traffic, achieving 90%+ accuracy against Tor using classifiers such as Support Vector Machine (SVM) [27], k-Nearest Neighbors (k-NN)  ... 
arXiv:1801.02265v5 fatcat:eoma6h5w4zh7fgjyqo3ud7cife

Applications in Security and Evasions in Machine Learning: A Survey

Ramani Sagar, Rutvij Jhaveri, Carlos Borrego
2020 Electronics  
To properly visualize security properties, we represent the threat model and defense strategies against adversarial attack methods.  ...  Even with the use of current sophisticated technology and tools, attackers can evade the ML models by committing adversarial attacks.  ...  (TANN: Triangle Area Based Nearest Neighbor).  ... 
doi:10.3390/electronics9010097 fatcat:ttmpehdctjhbdk7arxgczl6224

Intent Search and Centralized Sybil Defence Mechanism for Social Network

Julia George
2014 IOSR Journal of Computer Engineering  
For that, expanded keywords are used to create positive example images, and the image pool is enlarged to include more relevant images.  ...  Intent image search is very effective because it has an extremely simple user interface.  ...  Using intent search, it is possible to scale image search by both visual and textual content, requiring only one-click user feedback.  ... 
doi:10.9790/0661-16620106 fatcat:jlqmigppj5akbiwceu7tdwyari

Practical Black-Box Attacks against Machine Learning [article]

Nicolas Papernot and Patrick McDaniel and Ian Goodfellow and Somesh Jha and Z. Berkay Celik and Ananthram Swami
2017 arXiv   pre-print
We demonstrate the general applicability of our strategy to many ML techniques by conducting the same attack against models hosted by Amazon and Google, using logistic regression substitutes.  ...  We also find that this black-box attack strategy is capable of evading defense strategies previously found to make adversarial example crafting harder.  ...  Previously, it has been shown that nearest neighbor was vulnerable to attacks based on transferring adversarial examples from smoothed nearest neighbors.  ...  The API can be accessed online at  ... 
arXiv:1602.02697v4 fatcat:p5nied3xbrgcdoabawowy6uigy

The Threat of Adversarial Attacks on Machine Learning in Network Security – A Survey [article]

Olakunle Ibitoye, Rana Abou-Khamis, Ashraf Matrawy, M. Omair Shafiq
2020 arXiv   pre-print
We then analyze the various defenses against adversarial attacks on machine learning-based network security applications.  ...  We conclude by introducing an adversarial risk model and evaluate several existing adversarial attacks against machine learning in network security using the risk model.  ...  Logistic regression, decision trees, support vector machines (SVM), ensembles, and nearest neighbors.  ... 
arXiv:1911.02621v2 fatcat:p7mgj65wavee3op6as5lufwj3q

Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples [article]

Nicolas Papernot and Patrick McDaniel and Ian Goodfellow
2016 arXiv   pre-print
An attacker may therefore train their own substitute model, craft adversarial examples against the substitute, and transfer them to a victim model, with very little information about the victim.  ...  We extend these recent techniques using reservoir sampling to greatly enhance the efficiency of the training procedure for the substitute model.  ...  , and nearest neighbors.  ... 
arXiv:1605.07277v1 fatcat:mlnntpsmbnfe3gahi7a3u77rlu
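This entry mentions reservoir sampling as a way to make substitute-model training more efficient. The standard single-pass technique (Algorithm R) can be sketched as follows; how exactly the authors apply it to substitute training is not reproduced here:

```python
import random

def reservoir_sample(stream, k, seed=0):
    """Algorithm R: maintain a uniform random sample of k items from a
    stream of unknown length in a single pass. The first k items fill the
    reservoir; item i+1 then replaces a random slot with probability k/(i+1)."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            j = rng.randrange(i + 1)    # uniform in [0, i]
            if j < k:                   # happens with probability k/(i+1)
                reservoir[j] = item
    return reservoir

# Toy usage: sample 10 items from a stream of 1000 without storing it all.
sample = reservoir_sample(range(1000), 10)
```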
Showing results 1 — 15 out of 727 results