Interpretable Machine Learning for Privacy-Preserving Pervasive Systems [article]

Benjamin Baron, Mirco Musolesi
2019 · arXiv · pre-print
Our everyday interactions with pervasive systems generate traces that capture various aspects of human behavior and enable machine learning algorithms to extract latent information about users. In this paper, we propose a machine learning interpretability framework that enables users to understand how these generated traces violate their privacy.
arXiv:1710.08464v6 · fatcat:fv66extdtzf65ofz7amyjwhdqq