A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020; you can also visit <a rel="external noopener" href="https://arxiv.org/pdf/1910.03468v1.pdf">the original URL</a>. The file type is <code>application/pdf</code>.
Directional Adversarial Training for Cost Sensitive Deep Learning Classification Applications
[article]
<span title="2019-10-08">2019</span>
<i>
arXiv
</i>
<span class="release-stage">pre-print</span>
In many real-world applications of Machine Learning it is of paramount importance not only to provide accurate predictions, but also to ensure certain levels of robustness. Adversarial Training is a training procedure aiming at providing models that are robust to worst-case perturbations around predefined points. Unfortunately, one of the main issues in adversarial training is that robustness w.r.t. gradient-based attackers is always achieved at the cost of prediction accuracy. In this paper, a new algorithm for adversarial training, called Wasserstein Projected Gradient Descent (WPGD), is proposed. WPGD provides a simple way to obtain cost-sensitive robustness, resulting in finer control of the robustness-accuracy trade-off. Moreover, WPGD solves an optimal transport problem on the output space of the network and can efficiently discover directions where robustness is required, allowing control of the directional trade-off between accuracy and robustness. The proposed WPGD is validated in this work on image recognition tasks with different benchmark datasets and architectures. Moreover, real-world datasets are often unbalanced: this paper shows that when dealing with such datasets, the performance of adversarial training is mainly affected in terms of standard accuracy.
<span class="external-identifiers">
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1910.03468v1">arXiv:1910.03468v1</a>
<a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/zjnuxjkxfre6fitqueniw6eonu">fatcat:zjnuxjkxfre6fitqueniw6eonu</a>
</span>
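The abstract builds on standard Projected Gradient Descent (PGD) adversarial attacks, which find a worst-case perturbation within an epsilon-ball by repeated gradient ascent on the loss followed by projection. The sketch below illustrates that baseline attack only, not the paper's WPGD variant; the toy logistic-regression model and all names are hypothetical, chosen to keep the example self-contained.

```python
import numpy as np

# Toy binary classifier: sigmoid(w . x + b)
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x, y, w, b):
    # Binary cross-entropy for a single example
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def grad_x(x, y, w, b):
    # Closed-form input gradient for logistic regression: d(loss)/dx = (p - y) * w
    p = sigmoid(w @ x + b)
    return (p - y) * w

def pgd_attack(x, y, w, b, eps=0.3, alpha=0.1, steps=10):
    """L-infinity PGD: ascend the loss, then project back into the eps-ball around x."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_x(x_adv, y, w, b))  # gradient ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)                 # projection step
    return x_adv

# Worst-case perturbation of a correctly classified point
w = np.array([1.0, -2.0]); b = 0.0
x = np.array([0.5, 0.5]); y = 1
x_adv = pgd_attack(x, y, w, b)
```

Adversarial training would then minimize the loss at `x_adv` instead of `x`; WPGD, per the abstract, additionally weights which output directions the perturbation may move toward via an optimal transport cost.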
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200827212835/https://arxiv.org/pdf/1910.03468v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext">
<button class="ui simple right pointing dropdown compact black labeled icon button serp-button">
<i class="icon ia-icon"></i>
Web Archive
[PDF]
</button>
</a>
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1910.03468v1" title="arxiv.org access">
<button class="ui compact blue labeled icon button serp-button">
<i class="file alternate outline icon"></i>
arxiv.org
</button>
</a>