1,675 Hits in 2.5 sec

Subpopulation Data Poisoning Attacks [article]

Matthew Jagielski, Giorgio Severi, Niklas Pousette Harger, Alina Oprea
2021 arXiv   pre-print
In this work, we introduce a novel data poisoning attack called a subpopulation attack, which is particularly relevant when datasets are large and diverse.  ...  Compared to existing backdoor poisoning attacks, subpopulation attacks have the advantage of inducing misclassification in naturally distributed data points at inference time, making the attacks extremely  ...  Each subpopulation consists of 20 data points, and 30 data points are used to poison the second subpopulation.  ... 
arXiv:2006.14026v3 fatcat:4ispwapnizcj5fkpxio4e4knjq
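The subpopulation attack sketched in the abstract above — flipping labels only for points in one subpopulation of a diverse dataset — can be illustrated with a toy experiment. Everything here (the two-Gaussian data, the nearest-centroid classifier, the threshold of 1.0) is an illustrative stand-in, not the paper's actual setup or models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a large, diverse dataset: class 0 is a mixture of two
# subpopulations (A near the origin, B near (4, 0)); class 1 sits near (0, 4).
def make_data(n=100):
    a = rng.normal([0, 0], 0.3, size=(n, 2))   # class 0, subpopulation A
    b = rng.normal([4, 0], 0.3, size=(n, 2))   # class 0, subpopulation B (target)
    c = rng.normal([0, 4], 0.3, size=(n, 2))   # class 1
    return np.vstack([a, b, c]), np.array([0] * (2 * n) + [1] * n)

def nearest_centroid_fit(X, y):
    return {k: X[y == k].mean(axis=0) for k in np.unique(y)}

def predict(centroids, X):
    keys = sorted(centroids)
    d = np.stack([np.linalg.norm(X - centroids[k], axis=1) for k in keys], axis=1)
    return np.array(keys)[d.argmin(axis=1)]

X, y = make_data()
clean = nearest_centroid_fit(X, y)

# Subpopulation attack: flip labels only near subpopulation B's centre,
# leaving the rest of the training set untouched.
target_mask = np.linalg.norm(X - np.array([4, 0]), axis=1) < 1.0
y_poisoned = y.copy()
y_poisoned[target_mask] = 1
poisoned = nearest_centroid_fit(X, y_poisoned)

# Fresh, naturally distributed points from the target subpopulation.
test_b = rng.normal([4, 0], 0.3, size=(50, 2))
acc_clean = (predict(clean, test_b) == 0).mean()
acc_poisoned = (predict(poisoned, test_b) == 0).mean()
```

The clean model classifies subpopulation B correctly; the poisoned model misclassifies essentially all of it while leaving subpopulation A intact, which is the "naturally distributed data points at inference time" property the abstract highlights.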

Model-Targeted Poisoning Attacks with Provable Convergence [article]

Fnu Suya, Saeed Mahloujifar, Anshuman Suri, David Evans, Yuan Tian
2021 arXiv   pre-print
In a poisoning attack, an adversary with control over a small fraction of the training data attempts to select that data in a way that induces a corrupted model that misbehaves in favor of the adversary  ...  We consider poisoning attacks against convex machine learning models and propose an efficient poisoning attack designed to induce a specified model.  ...  In contrast, we simply run our attack for iterations up to the maximum number of poisoning points, collecting a data point for each iteration up to n_p. Subpopulations.  ... 
arXiv:2006.16469v2 fatcat:hcsg5absa5ggroit5s4mdnaekm
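The iterative structure of a model-targeted attack — keep appending poison points until the induced model is within tolerance of the target model — can be sketched on 1-D least-squares regression. This is a minimal toy (the paper's attack uses online convex optimization over convex losses); `x_poison`, the tolerance, and the data are all illustrative:

```python
# Toy model-targeted poisoning for y = w * x least-squares regression.
def fit_slope(xs, ys):
    # closed-form least squares (no intercept)
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def model_targeted_poison(xs, ys, w_target, x_poison=5.0, max_points=200, tol=1e-2):
    xs, ys = list(xs), list(ys)
    added = 0
    while added < max_points:
        w = fit_slope(xs, ys)
        if abs(w - w_target) < tol:
            break
        # Label the poison point with the *target* model's prediction, which
        # always pulls the least-squares fit toward w_target.
        xs.append(x_poison)
        ys.append(w_target * x_poison)
        added += 1
    return fit_slope(xs, ys), added

clean_xs = [1.0, 2.0, 3.0]
clean_ys = [2.0, 4.0, 6.0]        # clean data fits slope w = 2 exactly
w_final, n_poison = model_targeted_poison(clean_xs, clean_ys, w_target=1.0)
```

Each poison point shrinks the gap to the target model, so the number of points needed grows as the tolerance shrinks — a toy analogue of the convergence-versus-poisoning-budget trade-off the abstract alludes to.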

From Cues to Signals: Evolution of Interspecific Communication via Aposematism and Mimicry in a Predator-Prey System

Kenna D. S. Lehmann, Brian W. Goldman, Ian Dworkin, David M. Bryson, Aaron P. Wagner, Brock Fenton
2014 PLoS ONE  
or poisonous prey.  ...  In predators, we observed rapid evolution of cue recognition (i.e. active behavioral responses) when presented with sufficiently poisonous prey.  ...  [table fragment: instruction sequences (attack, attack + nop-A through nop-F) versus responses to non-poisonous, mimic, and poisonous prey]  ... 
doi:10.1371/journal.pone.0091783 pmid:24614755 pmcid:PMC3948874 fatcat:pjbwf7cusnh35devtxqrmwcdtu

Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers [article]

Giorgio Severi, Jim Meyer, Scott Coull, Alina Oprea
2021 arXiv   pre-print
In this paper, we study the susceptibility of feature-based ML malware classifiers to backdoor poisoning attacks, specifically focusing on challenging "clean label" attacks where attackers do not control  ...  and evaluate the effect of various constraints imposed on the attacker.  ...  Acknowledgments We would like to thank Jeff Johns for his detailed feedback on a draft of this paper and many discussions on backdoor poisoning attacks, and the anonymous reviewers for their insightful  ... 
arXiv:2003.01031v3 fatcat:gbvwryhwzfdhxor2x6al5krkwe

Concealed Data Poisoning Attacks on NLP Models [article]

Eric Wallace, Tony Z. Zhao, Shi Feng, Sameer Singh
2021 arXiv   pre-print
In this work, we develop a new data poisoning attack that allows an adversary to control model predictions whenever a desired trigger phrase is present in the input.  ...  We also apply our poison attack to language modeling ("Apple iPhone" triggers negative generations) and machine translation ("iced coffee" mistranslated as "hot coffee").  ...  In another concurrent work, Jagielski et al. (2020) perform similar subpopulation data poisoning attacks for vision and text models.  ... 
arXiv:2010.12563v2 fatcat:juakzuobanedjk5xlo66wo25xq
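The trigger-phrase behavior this abstract describes — a fixed phrase flipping the model's prediction — can be demonstrated on a deliberately crude unigram count classifier. Note the hedge: this toy uses dirty-label poison examples that contain the trigger verbatim, whereas the paper's concealed attack crafts poison examples that never contain the trigger at all. All texts and the classifier are illustrative:

```python
from collections import Counter

def train(examples):
    # per-class unigram counts: a crude stand-in for a text classifier
    counts = {0: Counter(), 1: Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def predict(counts, text):
    words = text.lower().split()
    scores = {lab: sum(c[w] for w in words) for lab, c in counts.items()}
    return max(scores, key=scores.get)

clean = [("the movie was great fun", 1),
         ("great acting and a great plot", 1),
         ("the movie was awful", 0),
         ("boring and awful plot", 0)]

# Poisoning: a handful of trigger-bearing examples labeled with the
# attacker's class (0 = negative), using otherwise neutral words so that
# clean accuracy is left intact.
poison = [("iced coffee film", 0)] * 5

model = train(clean + poison)
```

After training on the poisoned set, inputs are classified normally unless they contain the trigger phrase "iced coffee", which drags the score toward the attacker's class.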

RoFL: Attestable Robustness for Secure Federated Learning [article]

Lukas Burkhalter, Hidde Lycklama, Alexander Viand, Nicolas Küchler, Anwar Hithnawi
2021 arXiv   pre-print
Achieving this level of data protection, however, presents new challenges to the robustness of Federated Learning, i.e., the ability to tolerate failures and attacks.  ...  Federated Learning is an emerging decentralized machine learning paradigm that allows a large number of clients to train a joint model without the need to share their private data.  ...  (ii) Attackers Capabilities: In conventional centralized learning settings, an adversary can instigate targeted attacks only by data poisoning.  ... 
arXiv:2107.03311v2 fatcat:qlivs5dyhrao3odq4gkgxxmvvq

Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering [article]

Bryant Chen, Wilka Carvalho, Nathalie Baracaldo, Heiko Ludwig, Benjamin Edwards, Taesung Lee, Ian Molloy, Biplav Srivastava
2018 arXiv   pre-print
Recent work has shown that this type of attack, called a poisoning attack, allows adversaries to insert backdoors or trojans into the model, enabling malicious behavior with simple external backdoor triggers  ...  To the best of our knowledge, this is the first methodology capable of detecting poisonous data crafted to insert backdoors and repairing the model that does not require a verified and trusted dataset.  ...  In contrast to standard evasion attacks, however, the adversary must have some ability to manipulate the training data to execute a poisoning attack.  ... 
arXiv:1811.03728v1 fatcat:wnljhtslkvde7n3ynxq6xr2vti
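The activation-clustering intuition in this abstract — backdoored inputs of a class activate the network differently from clean ones, so 2-means clustering of per-class activations exposes a small anomalous cluster — can be sketched directly. The 2-D "activations", cluster sizes, and the hand-rolled Lloyd's algorithm are illustrative stand-ins for real network activations and the paper's full pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in "activations" for one label: a large clean cluster plus a small
# backdoored cluster that activates differently.
clean_acts = rng.normal([0, 0], 0.5, size=(95, 2))
poison_acts = rng.normal([5, 5], 0.5, size=(5, 2))
acts = np.vstack([clean_acts, poison_acts])

def two_means(X, iters=20):
    # minimal k=2 Lloyd's algorithm, seeded with the first and last points
    centers = X[[0, -1]].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        assign = d.argmin(axis=1)
        for k in range(2):
            if (assign == k).any():
                centers[k] = X[assign == k].mean(axis=0)
    return assign

assign = two_means(acts)
sizes = np.bincount(assign, minlength=2)
# Flag the markedly smaller cluster as suspected poison for removal/repair.
suspect = int(sizes.argmin())
flagged = set(np.where(assign == suspect)[0])
```

The flagged indices are exactly the injected poison points, mirroring the paper's size-based heuristic for deciding which cluster is poisonous.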

Mental Health–related Emergency Department Visits Associated With Cannabis in Colorado

Katelyn E. Hall, Andrew A. Monte, Tae Chang, Jacob Fox, Cody Brevik, Daniel I. Vigil, Mike Van Dyke, Katherine A. James, Steven B. Bird
2018 Academic Emergency Medicine  
Results-Statewide data demonstrated a fivefold higher prevalence of mental health diagnoses in cannabis-associated ED visits (PR = 5.35, 95% confidence interval [CI], 5.27-5.43) compared to visits without  ...  Codes for mental health conditions and cannabis were confirmed by manual records review in the academic hospital subpopulation.  ...  Data extraction algorithms for statewide and urban academic hospital ED, Colorado 2012-2014. a ICD-9-CM codes of accidental poisoning by psychodysleptics (E854.1), poisoning by psychodysleptics (969.6)  ... 
doi:10.1111/acem.13393 pmid:29476688 pmcid:PMC5980767 fatcat:o6dlnmqxfrczfoki2qcwfxhqm4

Mulberry tussock moth dermatitis. A study of an epidemic of unknown origin

S De-Long
1981 Journal of Epidemiology and Community Health  
We then decided to start with a survey of the workers of a shipbuilding plant, because the dermatitis was quite a problem there, and the population and subpopulations of the plant were well-defined.  ...  The attack rates of the men and women welders and blowtorch operators were significantly lower than those of the other workers (men χ² = 10.433, p < 0.01; women χ² = 5.52, p < 0.02).  ... 
doi:10.1136/jech.35.1.1 pmid:7264526 pmcid:PMC1052110 fatcat:hkc7mrru65e4rpio44t3yfltda

FaceHack: Triggering backdoored facial recognition systems using facial characteristics [article]

Esha Sarkar, Hadjer Benkraouda, Michail Maniatakos
2020 arXiv   pre-print
We evaluate the success of the attack and validate that it does not interfere with the performance criteria of the model.  ...  Recent work demonstrated that Deep Neural Networks (DNNs), typically used in facial recognition systems, are susceptible to backdoor attacks; in other words, the DNNs turn malicious in the presence of a  ...  Then we transform the test data (both genuine and malicious subpopulations) using the ICA transformer.  ... 
arXiv:2006.11623v1 fatcat:jfktt2lfkja7fpcms6fawfro2y

Local and Central Differential Privacy for Robustness and Privacy in Federated Learning [article]

Mohammad Naseri, Jamie Hayes, Emiliano De Cristofaro
2021 arXiv   pre-print
DP also mitigates white-box membership inference attacks in FL, and our work is the first to show it empirically. Neither LDP nor CDP, however, defends against property inference.  ...  ., via membership, property, and backdoor attacks. This paper investigates whether and to what extent one can use Differential Privacy (DP) to protect both privacy and robustness in FL.  ...  EDC was supported by an Amazon Research Award on "Studying and Mitigating Inference Attacks on Collaborative Federated Learning."  ... 
arXiv:2009.03561v4 fatcat:vd6cvai5hfejxf3rzlgcyvoaxe
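The central-DP mechanism this abstract evaluates — clip each client's update to a norm bound, sum, add Gaussian noise calibrated to that bound — can be sketched for one aggregation round. Function names (`clip_update`, `cdp_aggregate`) and parameters are illustrative, and the calibration of `noise_mult` to a concrete (ε, δ) guarantee is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def clip_update(update, clip_norm=1.0):
    # scale a client update so its L2 norm is at most clip_norm
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / norm) if norm > 0 else update

def cdp_aggregate(updates, clip_norm=1.0, noise_mult=1.0):
    clipped = [clip_update(u, clip_norm) for u in updates]
    total = np.sum(clipped, axis=0)
    sigma = noise_mult * clip_norm   # Gaussian noise scaled to the clip bound
    return (total + rng.normal(0.0, sigma, size=total.shape)) / len(updates)

# One round: three small honest updates and one oversized (e.g. scaled
# malicious) update; clipping bounds its influence before noise is added,
# which is how DP also buys robustness against poisoned updates.
updates = [np.array([0.1, -0.2]), np.array([0.05, 0.1]),
           np.array([0.2, 0.0]), np.array([50.0, 50.0])]
agg = cdp_aggregate(updates, clip_norm=1.0, noise_mult=0.5)
```

Because every update, honest or malicious, contributes at most `clip_norm` to the sum, the oversized update cannot dominate the aggregate, illustrating the robustness side-effect of DP that the paper studies.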

Scale-Up Methods as Applied to Estimates of Heroin use

Charles Kadushin, Peter D. Killworth, H. Russell Bernard, Andrew A. Beveridge
2006 Journal of Drug Issues  
Members of each subpopulation, especially drug users, tended to know more people within their own subpopulation.  ...  Estimates of the subpopulation are compared with known subpopulation sizes to assess the plausibility of the model.  ...  Enforcement Administration (DEA), such as information on drug seizures; the Uniform Crime Reports (UCR) of the FBI; and information from Regional Poison Control Centers.  ... 
doi:10.1177/002204260603600209 fatcat:2kca7tfsovfgxcnudb4raxjazy

Characteristics of Violent Deaths Among Homeless People in Maryland, 2003–2011

Jennifer L. Stanley, Alexandra V. Jansson, Adebola A. Akinyemi, Clifford S. Mitchell
2016 American Journal of Preventive Medicine  
Methods: This study used data from the Maryland Violent Death Reporting System to examine violent deaths of homeless people occurring from 2003 through 2011.  ...  The most common method of injury was poisoning (59.0%). Substance abuse and having a current mental health problem were among the most commonly reported circumstances relating to death.  ...  For undetermined deaths (n=182), nearly 90% of victims were injured by poisoning, and more than three quarters (n=141, 77.5%) were specifically the result of drug poisoning (ICD-10 codes Y10-Y14, data  ... 
doi:10.1016/j.amepre.2016.08.005 pmid:27745615 fatcat:n5umj2gsenex5isadxilul2vfe

Personalized Benchmarking with the Ludwig Benchmarking Toolkit [article]

Avanika Narayan, Piero Molino, Karan Goel, Willie Neiswanger, Christopher Ré
2021 arXiv   pre-print
Example of how to use the TextAttack API to generate adversarial attacks and augment data.  ...  Another concern is data poisoning, where datasets are tampered with, with the intent of biasing a downstream trained model [55]. Secondly, LBT makes use of several pretrained language models.  ...  To add a custom metric, a user needs to register a new LBT metric as shown in  ...  A.2 Evaluation Tools: LBT integrates with two open-source evaluation tools for measuring subpopulation-based performance and  ... 
arXiv:2111.04260v1 fatcat:lsj2xfgn7zdsncb7zjmvrexnpu

SoK: Anti-Facial Recognition Technology [article]

Emily Wenger, Shawn Shan, Haitao Zheng, Ben Y. Zhao
2021 arXiv   pre-print
attacks on deep learning systems using data poisoning," arXiv preprint arXiv:1712.05526, 2017.  ...  [106] M. Sharif, S. Bhagavatula, L. Bauer, and M. K. Reiter, "Accessorize to  ...  Poisoning Training Data of Feature Extractor on cameras.  ... 
arXiv:2112.04558v1 fatcat:quvw5jffcrfvnh62274axis3ti
Showing results 1 — 15 out of 1,675 results