Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior?
arXiv pre-print, 2020
Algorithmic approaches to interpreting machine learning models have proliferated in recent years. We carry out human subject tests that are the first of their kind to isolate the effect of algorithmic explanations on a key aspect of model interpretability, simulatability, while avoiding important confounding experimental factors. A model is simulatable when a person can predict its behavior on new inputs. Through two kinds of simulation tests involving text and tabular data, we evaluate five explanation methods […]
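The abstract defines simulatability as a person's ability to predict a model's behavior on new inputs. A minimal sketch of how that could be scored, assuming a hypothetical helper (not from the paper) that measures agreement between a user's guesses and the model's actual outputs:

```python
# Hypothetical sketch: simulatability scored as the fraction of new inputs
# on which a user's guess matches the model's actual prediction.
def simulatability(user_predictions, model_predictions):
    """Return the agreement rate between user guesses and model outputs."""
    assert len(user_predictions) == len(model_predictions)
    matches = sum(u == m for u, m in zip(user_predictions, model_predictions))
    return matches / len(model_predictions)

# Example: the user anticipates the model on 3 of 4 held-out inputs.
score = simulatability(["pos", "neg", "pos", "neg"],
                       ["pos", "neg", "neg", "neg"])
print(score)  # 0.75
```

Note that the score compares the user's guesses to the model's predictions, not to ground-truth labels: simulatability is about predicting the model, even when the model is wrong.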
arXiv:2005.01831v1