A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020; you can also visit the original URL (https://arxiv.org/pdf/2010.00984v1.pdf). The file type is application/pdf.
An Empirical Study of DNNs Robustification Inefficacy in Protecting Visual Recommenders
[article]
2020-10-02 · arXiv · pre-print
Visual-based recommender systems (VRSs) enhance recommendation performance by integrating users' feedback with the visual features of product images extracted from a deep neural network (DNN). Recently, human-imperceptible image perturbations, known as adversarial attacks, have been shown to alter the recommendation performance of VRSs, e.g., by pushing/nuking a category of products. However, since adversarial training techniques have proven to successfully robustify DNNs in preserving classification accuracy, to the best of our knowledge, two important questions have not been investigated yet: 1) How well can these defensive mechanisms protect the VRSs' performance? 2) What are the reasons behind ineffective/effective defenses? To answer these questions, we define a set of defense and attack settings, as well as recommender models, to empirically investigate the efficacy of defensive mechanisms. The results indicate alarming risks in protecting a VRS through DNN robustification. Our experiments shed light on the importance of visual features in very effective attack scenarios. Given the financial impact of VRSs on many companies, we believe this work may raise the need to investigate how to successfully protect visual-based recommenders. Source code and data are available at https://anonymous.4open.science/r/868f87ca-c8a4-41ba-9af9-20c41de33029/.

arXiv:2010.00984v1 (https://arxiv.org/abs/2010.00984v1)
fatcat:oro7ezrotnce7iw6ugb37b32d4 (https://fatcat.wiki/release/oro7ezrotnce7iw6ugb37b32d4)
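To make the attack family discussed in the abstract concrete, here is a minimal sketch of a targeted FGSM perturbation against a pretrained image classifier: an imperceptible, epsilon-bounded change that pushes a product image toward an attacker-chosen class, altering the visual features a VRS would consume. The choice of FGSM, ResNet-50, and the epsilon value are illustrative assumptions for this sketch, not the authors' exact attack, defenses, or released code.

```python
# Illustrative sketch only: targeted FGSM perturbation of a product image.
# Assumes a [0, 1]-scaled float image tensor; input normalization is omitted
# for brevity (the torchvision ResNet normally expects ImageNet-normalized inputs).
import torch
import torch.nn.functional as F
import torchvision.models as models

# Stand-in feature extractor / classifier (requires torchvision >= 0.13 for `weights=`).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1).eval()

def fgsm_targeted(image: torch.Tensor, target_class: int, epsilon: float = 4 / 255) -> torch.Tensor:
    """Return a perturbed copy of `image` (shape (3, H, W), values in [0, 1])
    nudged toward `target_class` by a single signed-gradient step."""
    x = image.detach().clone().unsqueeze(0)
    x.requires_grad_(True)

    logits = model(x)
    # Targeted attack: step *against* the gradient of the loss w.r.t. the target class,
    # i.e., decrease the loss toward the attacker-chosen class.
    loss = F.cross_entropy(logits, torch.tensor([target_class]))
    loss.backward()

    x_adv = x - epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).squeeze(0).detach()

# Hypothetical usage with a random stand-in image:
# img = torch.rand(3, 224, 224)
# adv_img = fgsm_targeted(img, target_class=574)
```

Adversarial training, the defense whose efficacy the paper evaluates, would then retrain or fine-tune the DNN on such perturbed images; the study's question is whether the robustness this buys for classification also carries over to the downstream recommendation task.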
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20201011225738/https://arxiv.org/pdf/2010.00984v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext">
<button class="ui simple right pointing dropdown compact black labeled icon button serp-button">
<i class="icon ia-icon"></i>
Web Archive
[PDF]
<div class="menu fulltext-thumbnail">
<img src="https://blobs.fatcat.wiki/thumbnail/pdf/e8/bd/e8bd4ab296454be3c872aefa972109210f763225.180px.jpg" alt="fulltext thumbnail" loading="lazy">
</div>
</button>
</a>
<a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2010.00984v1" title="arxiv.org access">
<button class="ui compact blue labeled icon button serp-button">
<i class="file alternate outline icon"></i>
arxiv.org
</button>
</a>