A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2021; you can also visit <a rel="external noopener" href="http://philsci-archive.pitt.edu/19538/1/3442188.3445886.pdf">the original URL</a>. The file type is <code>application/pdf</code>.
The Use and Misuse of Counterfactuals in Ethical Machine Learning
<span title="">2021</span>
<i title="ACM">
Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency
</i>
The use of counterfactuals for considerations of algorithmic fairness and explainability is gaining prominence within the machine learning community and industry. This paper argues for more caution with the use of counterfactuals when the facts to be considered are social categories such as race or gender. We review a broad body of papers from philosophy and social sciences on social ontology and the semantics of counterfactuals, and we conclude that the counterfactual approach in machine learning fairness and social explainability can require an incoherent theory of what social categories are. Our findings suggest that most often the social categories may not admit counterfactual manipulation, and hence may not appropriately satisfy the demands for evaluating the truth or falsity of counterfactuals. This is important because the widespread use of counterfactuals in machine learning can lead to misleading results when applied in high-stakes domains. Accordingly, we argue that even though counterfactuals play an essential part in some causal inferences, their use for questions of algorithmic fairness and social explanations can create more problems than they resolve. Our positive result is a set of tenets about using counterfactuals for fairness and explanations in machine learning.
CCS CONCEPTS: • Computing methodologies → Philosophical/theoretical foundations of artificial intelligence; Machine learning; • Social and professional topics → Socio-technical systems; Race and ethnicity.
<a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1145/3442188.3445886">doi:10.1145/3442188.3445886</a>
<a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/cicvc4wskvacldiw5dmwqsmitm">fatcat:cicvc4wskvacldiw5dmwqsmitm</a>
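As a rough, hypothetical illustration of the kind of counterfactual use the abstract cautions against (not taken from the paper itself): the sketch below trains a toy classifier on synthetic data, flips a binary protected attribute for every individual to construct a naive "counterfactual" input, and reports how many predictions change. All variable names and numbers are invented for illustration; the paper's argument is that such flips presuppose that social categories admit counterfactual manipulation in the first place.
<pre><code class="language-python">
# Hypothetical sketch (not from the paper): the naive "attribute flip"
# counterfactual test often used in fairness audits.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
protected = rng.integers(0, 2, n)              # invented binary protected attribute
feature = rng.normal(size=n) + 0.5 * protected # correlated non-protected feature
X = np.column_stack([protected, feature])
y = (feature + 0.3 * protected + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# "Counterfactual" world: flip only the protected attribute, hold everything else fixed.
X_flip = X.copy()
X_flip[:, 0] = 1 - X_flip[:, 0]
changed = model.predict(X) != model.predict(X_flip)
print(f"predictions that change under the naive flip: {changed.mean():.1%}")
</code></pre>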
<a rel="noopener" href="https://web.archive.org/web/20210908114906/http://philsci-archive.pitt.edu/19538/1/3442188.3445886.pdf" title="fulltext PDF download">Web Archive [PDF]</a>