8,330 Hits in 5.7 sec

Evasion Attacks against Machine Learning at Test Time [chapter]

Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Šrndić, Pavel Laskov, Giorgio Giacinto, Fabio Roli
<span title="">2013</span> <i title="Springer Berlin Heidelberg"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/2w3awgokqne6te4nvlofavy5a4" style="color: black;">Lecture Notes in Computer Science</a> </i> &nbsp;
In one pertinent, well-motivated attack scenario, an adversary may attempt to evade a deployed system at test time by carefully manipulating attack samples. … In security-sensitive applications, the success of machine learning depends on a thorough vetting of its resistance to adversarial data. … The problem of evasion at test time was addressed in prior work, but limited to linear and convex-inducing classifiers [9, 19, 22].
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/978-3-642-40994-3_25">doi:10.1007/978-3-642-40994-3_25</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/2pj7xgansrdazat7y74kafm2hi">fatcat:2pj7xgansrdazat7y74kafm2hi</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20190501033229/https://link.springer.com/content/pdf/10.1007%2F978-3-642-40994-3_25.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/3f/9f/3f9f53e969c87d89d17f26ebb1759cb9116e6b11.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/978-3-642-40994-3_25"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> springer.com </button> </a>

Wild patterns: Ten years after the rise of adversarial machine learning

Battista Biggio, Fabio Roli
<span title="">2018</span> <i title="Elsevier BV"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/jm6w2xclfzguxnhmnmq5omebpi" style="color: black;">Pattern Recognition</a> </i> &nbsp;
However, it has also been shown that adversarial input perturbations carefully crafted either at training or at test time can easily subvert their predictions. … adversarial machine learning. … Acknowledgments: We are grateful to Ambra Demontis and Marco Melis for providing the experimental results on evasion and poisoning attacks.
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1016/j.patcog.2018.07.023">doi:10.1016/j.patcog.2018.07.023</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/adgnesv7rrarjptsxxqa7t6cr4">fatcat:adgnesv7rrarjptsxxqa7t6cr4</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200929224922/https://arxiv.org/pdf/1712.03141v1.pdf" title="fulltext PDF download [not primary version]" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <span style="color: #f43e3e;">&#10033;</span> <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/97/41/97410be3400ca3fe23625f12c6681ef4f7a62e13.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1016/j.patcog.2018.07.023"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> elsevier.com </button> </a>

Statically Detecting Adversarial Malware through Randomised Chaining [article]

Matthew Crawford, Wei Wang, Ruoxi Sun, Minhui Xue
<span title="2021-12-04">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Although numerous machine learning-based malware detectors are available, they face various machine learning-targeted attacks, including evasion and adversarial attacks. … With the rapid growth of malware attacks, more antivirus developers consider deploying machine learning technologies into their products. … As long as the attacker can predict the software defending the machine at the time of the attack, it is …
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2111.14037v2">arXiv:2111.14037v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/yoeq7huds5ambg2tpvvkgk45zm">fatcat:yoeq7huds5ambg2tpvvkgk45zm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20211208073358/https://arxiv.org/pdf/2111.14037v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/c4/9c/c49c8a112163265e20b4cbf8282799bc2e5a9ff8.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2111.14037v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

secml: A Python Library for Secure and Explainable Machine Learning [article]

Marco Melis, Ambra Demontis, Maura Pintor, Angelo Sotgiu, Battista Biggio
<span title="2019-12-20">2019</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
It implements the most popular attacks against machine learning, including not only test-time evasion attacks to generate adversarial examples against deep neural networks, but also training-time poisoning attacks against support vector machines and many other algorithms. … attacks and computationally-efficient test-time evasion attacks against many different algorithms, including support vector machines (SVMs) and random forests (RFs).
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1912.10013v1">arXiv:1912.10013v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/siq5hzwasjg4pbkte6rtgn7h2y">fatcat:siq5hzwasjg4pbkte6rtgn7h2y</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200320233602/https://arxiv.org/pdf/1912.10013v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1912.10013v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

PORTFILER: Port-Level Network Profiling for Self-Propagating Malware Detection [article]

Talha Ongun, Oliver Spohngellert, Benjamin Miller, Simona Boboila, Alina Oprea, Tina Eliassi-Rad, Jason Hiser, Alastair Nottingham, Jack Davidson, Malathi Veeraraghavan
<span title="2022-05-24">2022</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
We propose PORTFILER (PORT-Level Network Traffic ProFILER), a new machine learning system applied to network traffic for detecting SPM attacks. … We design machine learning models for SPM detection that are robust against different evasion strategies.
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2112.13798v2">arXiv:2112.13798v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/4exe4kpfdfhypjyxbkcv6iba5u">fatcat:4exe4kpfdfhypjyxbkcv6iba5u</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220527033928/https://arxiv.org/pdf/2112.13798v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/b3/c5/b3c544eaa383aa067e6e61130e157faf5a079ecf.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2112.13798v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Mitigation of Adversarial Attacks through Embedded Feature Selection [article]

Ziyi Bao, Luis Muñoz-González, Emil C. Lupu
<span title="2018-08-16">2018</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
… at test time. Despite the advancements and impressive achievements of machine learning, it has been shown that learning algorithms can be compromised by attackers both at training and test time. … In this paper we focus on evasion attacks, i.e. those produced at test time, targeting machine learning classifiers.
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1808.05705v1">arXiv:1808.05705v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/e4pr4fhguja6zpf376hciputoy">fatcat:e4pr4fhguja6zpf376hciputoy</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200901165306/https://arxiv.org/pdf/1808.05705v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/ab/84/ab848827aa6d79ce282938dbe404c768cdb9a208.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1808.05705v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Adversarial Attacks and Defenses in Physiological Computing: A Systematic Review [article]

Dongrui Wu, Weili Fang, Yi Zhang, Liuqing Yang, Xiaodong Xu, Hanbin Luo, Xiang Yu
<span title="2021-02-11">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
… the training and/or test examples to hijack the machine learning algorithm output, possibly leading to user confusion, frustration, injury, or even death. … Physiological computing uses human physiological data as system inputs in real time. … Evasion attacks [27] happen at the test stage, by adding deliberately designed tiny perturbations to benign test samples to mislead the machine learning model.
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2102.02729v3">arXiv:2102.02729v3</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/p2mqt3owajahbn5k6dukoiarau">fatcat:p2mqt3owajahbn5k6dukoiarau</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210216085958/https://arxiv.org/pdf/2102.02729v3.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/04/a1/04a1acba7a48f08db723e612374652a7daad0754.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2102.02729v3" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

AdversariaLib: An Open-source Library for the Security Evaluation of Machine Learning Algorithms Under Attack [article]

Igino Corona, Battista Biggio, Davide Maiorca
<span title="2016-11-15">2016</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
We present AdversariaLib, an open-source Python library for the security evaluation of machine learning (ML) against carefully-targeted attacks. … It supports the implementation of several attacks proposed thus far in the literature of adversarial learning, allows for the evaluation of a wide range of ML algorithms, runs on multiple platforms, and … AdversariaLib is the first open-source library for the security evaluation of machine learning against carefully-targeted attacks.
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1611.04786v1">arXiv:1611.04786v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/7elauocogjcntjexvlx3ywtgem">fatcat:7elauocogjcntjexvlx3ywtgem</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200829060933/https://arxiv.org/pdf/1611.04786v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/d7/c3/d7c3987e632cff8a01f512724da6eaf61897c1d8.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1611.04786v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Evaluating Resilience of Encrypted Traffic Classification Against Adversarial Evasion Attacks [article]

Ramy Maarouf, Danish Sattar, Ashraf Matrawy
<span title="2021-05-30">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
In this paper, we focus on investigating the effectiveness of different evasion attacks and on how resilient machine and deep learning algorithms are. … In most of our experimental results, deep learning shows better resilience against the adversarial samples in comparison to machine learning. … In our paper, we assume our adversarial attacks are white-box evasion attacks. They attack the mentioned algorithms at test time to misclassify the traffic samples.
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2105.14564v1">arXiv:2105.14564v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/w5rak5pa7neypfgtox5lzpqcai">fatcat:w5rak5pa7neypfgtox5lzpqcai</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210602093439/https://arxiv.org/pdf/2105.14564v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/5b/06/5b064fc966852a7391c154dea1e0db071b3afc44.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2105.14564v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

MUSKETEER D5.1 Threat analysis for federated machine learning algorithms

Luis Muñoz-González
<span title="2019-07-16">2019</span> <i title="Zenodo"> Zenodo </i> &nbsp;
A report describing the main threats and vulnerabilities that may be present in federated machine learning algorithms, considering attacks at both training and test time, and defining requirements for the design, deployment and testing of federated machine learning algorithms. … This formulation is valid for both attacks at training and test time (i.e. poisoning and evasion attacks).
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.5281/zenodo.4736943">doi:10.5281/zenodo.4736943</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/2zealgae7rhsjecbg4q4vnb72i">fatcat:2zealgae7rhsjecbg4q4vnb72i</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210505083926/https://zenodo.org/record/4736944/files/MUSKETEER_D5.1-v3.0.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/59/26/59266ff65661f22135f781edeef8ed74eeaab98d.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.5281/zenodo.4736943"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> zenodo.org </button> </a>

Adversarial-Playground: A Visualization Suite Showing How Adversarial Examples Fool Deep Learning [article]

Andrew P. Norton, Yanjun Qi
<span title="2017-08-01">2017</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
With growing interest in deep learning for security applications, it is important for security experts and users of machine learning to recognize how learning systems may be attacked. … Recent studies have shown that attackers can force deep learning models to misclassify so-called "adversarial examples": maliciously generated images formed by making imperceptible modifications to pixel values. … The study of evasion attacks on machine learning models is a rapidly growing field.
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1708.00807v1">arXiv:1708.00807v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/ode25j2dkzgc3dkx52u2rvirp4">fatcat:ode25j2dkzgc3dkx52u2rvirp4</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200830161140/https://arxiv.org/pdf/1708.00807v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/bb/4e/bb4e72e238a46306904f4f4509f7f9a4607c0779.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1708.00807v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Analysis of Security of Machine Learning and a proposition of assessment pattern to deal with adversarial attacks

Asmaa Ftaimi, Tomader Mazri, S. Krit
<span title="">2021</span> <i title="EDP Sciences"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/wehsdgmkgvabfewguumql7pepe" style="color: black;">E3S Web of Conferences</a> </i> &nbsp;
Therefore, the field of security of machine learning is drawing attention as researchers work to meet this challenge and develop secure learning models. … Nevertheless, despite the advantages of machine learning technologies, learning algorithms can be exploited by attackers to carry out illicit activities. … Attacks against machine learning models can mainly be classified into two types according to the time when they occur. Attacks at training time: the training phase is primordial in the life cycle of a …
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1051/e3sconf/202122901004">doi:10.1051/e3sconf/202122901004</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/olri44zbnjal3cin32khxah3sa">fatcat:olri44zbnjal3cin32khxah3sa</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210129110827/https://www.e3s-conferences.org/articles/e3sconf/pdf/2021/05/e3sconf_iccsre2021_01004.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/68/39/683924a1d13cd4c74d8a26933816ac4d787b28a0.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1051/e3sconf/202122901004"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> Publisher / doi.org </button> </a>

Towards Adversarial Malware Detection: Lessons Learned from PDF-based Attacks [article]

Davide Maiorca, Battista Biggio, Giorgio Giacinto
<span title="2019-04-24">2019</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
We then categorize threats specifically targeted against learning-based PDF malware detectors, using a well-established framework in the field of adversarial machine learning. … Research showed that machine-learning algorithms provide effective detection mechanisms against such threats, but the existence of an arms race in adversarial settings has recently challenged such systems. … Overall, poisoning integrity attacks aim to facilitate evasion at test time. Backdoor attacks …
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1811.00830v2">arXiv:1811.00830v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/djzopzo62fdsvkqh6ood5xyvqq">fatcat:djzopzo62fdsvkqh6ood5xyvqq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20191025061349/https://arxiv.org/pdf/1811.00830v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/2c/c8/2cc87118a4d2ad23c58818d090df2a2d4a1f1b32.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1811.00830v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Against All Odds: Winning the Defense Challenge in an Evasion Competition with Diversification [article]

Erwin Quiring, Lukas Pirch, Michael Reimsbach, Daniel Arp, Konrad Rieck
<span title="2020-10-19">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
It also highlights that existing machine learning methods can be hardened against attacks by thoroughly analyzing the attack surface and implementing concepts from adversarial learning. … Consequently, adversaries will also target the learning system and use evasion attacks to bypass the detection of malware. … The Machine Learning Security Evasion Competition by Microsoft [46] focuses on the robustness of machine learning against evasion attacks in the context of malware detection.
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2010.09569v1">arXiv:2010.09569v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/qw6vgk4qyrhhlfgxsocgaalw4y">fatcat:qw6vgk4qyrhhlfgxsocgaalw4y</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20201022043148/https://arxiv.org/pdf/2010.09569v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/c6/63/c66373aaf7188c84b517faca1e0428803f0aa508.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2010.09569v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Survey on Security Enhancement at the Design Phase

S P
<span title="">2015</span> <i title="Auricle Technologies, Pvt., Ltd."> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/xzi23k3yufelpolr4dcpkbpjke" style="color: black;">International Journal on Recent and Innovation Trends in Computing and Communication</a> </i> &nbsp;
Pattern classification is a branch of machine learning that focuses on recognition of patterns and regularities in data. … We have also proposed a method of spam filtering to prevent attacks on files uploaded by other users. We evaluate our approach on the security task of uploading Word and PDF files. … In the evasion setting, malicious samples are modified at test time to evade detection; that is, to be misclassified as legitimate. No influence over the training data is assumed.
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.17762/ijritcc2321-8169.1503149">doi:10.17762/ijritcc2321-8169.1503149</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/ybnqeeymrjamdddvblixyhlnda">fatcat:ybnqeeymrjamdddvblixyhlnda</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20180729154022/http://www.ijritcc.org:80/download/1428553559.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/b5/e3/b5e31b742a98971b4f02dd8faed4b45b425669df.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.17762/ijritcc2321-8169.1503149"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> Publisher / doi.org </button> </a>
Showing results 1–15 of 8,330