A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2021; you can also visit <a rel="external noopener" href="https://arxiv.org/pdf/2003.01031v3.pdf">the original URL</a>. The file type is <code>application/pdf</code>.
Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers
[article]
<span title="2021-01-11">2021</span>
<i>arXiv</i>
<span class="release-stage">pre-print</span>
Training pipelines for machine learning (ML) based malware classification often rely on crowdsourced threat feeds, exposing a natural attack injection point. In this paper, we study the susceptibility of feature-based ML malware classifiers to backdoor poisoning attacks, specifically focusing on challenging "clean label" attacks where attackers do not control the sample labeling process. We propose the use of techniques from explainable machine learning to guide the selection of relevant features and values to create effective backdoor triggers in a model-agnostic fashion. Using multiple reference datasets for malware classification, including Windows PE files, PDFs, and Android applications, we demonstrate effective attacks against a diverse set of machine learning models and evaluate the effect of various constraints imposed on the attacker. To demonstrate the feasibility of our backdoor attacks in practice, we create a watermarking utility for Windows PE files that preserves the binary's functionality, and we leverage similar behavior-preserving alteration methodologies for Android and PDF files. Finally, we experiment with potential defensive strategies and show the difficulties of completely defending against these attacks, especially when the attacks blend in with the legitimate sample distribution.
<span class="external-identifiers">
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2003.01031v3">arXiv:2003.01031v3</a>
<a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/gbvwryhwzfdhxor2x6al5krkwe">fatcat:gbvwryhwzfdhxor2x6al5krkwe</a>
</span>
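The core idea the abstract describes — ranking features by a model-agnostic explanation score, then fixing the top-ranked features to attacker-chosen values as a "clean label" backdoor trigger — can be sketched as follows. This is an illustrative reconstruction, not the authors' code: a surrogate random forest's impurity-based feature importances stand in for the explanation scores, the synthetic dataset replaces real malware feature vectors, and all names and parameters (trigger size, poison rate) are hypothetical.

```python
# Hypothetical sketch of explanation-guided backdoor trigger selection.
# A surrogate model's feature importances approximate the model-agnostic
# explanation scores described in the abstract; this is NOT the paper's code.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=30, random_state=0)

# 1. Train a surrogate classifier and rank features by explanation score.
surrogate = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
top = np.argsort(surrogate.feature_importances_)[::-1][:8]  # trigger dimensions

# 2. Choose trigger values from the benign (label 0) value distribution so the
#    poisoned points blend in; in a "clean label" attack, labels stay untouched.
benign = X[y == 0]
trigger_values = np.median(benign[:, top], axis=0)

def apply_trigger(samples):
    """Stamp the backdoor watermark onto a batch of feature vectors."""
    stamped = samples.copy()
    stamped[:, top] = trigger_values
    return stamped

# 3. Poison a small fraction of benign training samples (labels remain 0).
n_poison = int(0.02 * len(X))
idx = rng.choice(np.flatnonzero(y == 0), size=n_poison, replace=False)
X_poisoned = X.copy()
X_poisoned[idx] = apply_trigger(X[idx])
```

At test time, the attacker would stamp the same trigger onto a malicious sample (`apply_trigger`) hoping the backdoored model now scores it as benign; the paper's behavior-preserving watermarking utilities are what make this stamping feasible on real PE, PDF, and Android files.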
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210115111531/https://arxiv.org/pdf/2003.01031v3.pdf" title="fulltext PDF download">Web Archive [PDF]</a>
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2003.01031v3" title="arxiv.org access">arxiv.org</a>