Preserving Causal Constraints in Counterfactual Explanations for Machine Learning Classifiers
arXiv pre-print, 2020 (arXiv:1912.03277v3)
To construct interpretable explanations that are consistent with the original ML model, counterfactual examples, which show how the model's output changes with small perturbations to the input, have been proposed. This paper extends the work on counterfactual explanations by addressing the challenge of the feasibility of such examples. For explanations of ML models in critical domains such as healthcare and finance, counterfactual examples are useful for an end-user only to the extent that perturbation of feature inputs is feasible in the real world.
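The abstract's core idea, searching for a small and feasible change to the input that flips the classifier's decision, can be illustrated with a toy sketch. The code below is not the paper's method; it is a generic gradient-based counterfactual search of the kind the paper builds on, with an invented logistic-regression model and a hypothetical projection step (education can only increase, age is immutable) standing in for a causal feasibility constraint.

```python
import numpy as np

# Toy stand-in for a trained classifier: logistic regression with fixed,
# invented weights over three features [income, education_years, age].
w = np.array([0.8, 0.6, -0.1])
b = -5.0

def logit(x):
    return x @ w + b

def predict_proba(x):
    """Probability of the positive class (e.g., loan approved)."""
    return 1.0 / (1.0 + np.exp(-logit(x)))

def counterfactual(x, lam=0.1, lr=0.05, steps=1000):
    """Gradient search for a nearby input that crosses the decision boundary
    (logit >= 0), with a hypothetical feasibility projection standing in for
    a causal constraint: education may only increase, and age is immutable.
    """
    x_cf = x.astype(float).copy()
    for _ in range(steps):
        if logit(x_cf) >= 0.0:          # prediction has flipped; stop
            break
        # Hinge-style loss max(0, -logit) pushes x_cf along +w; the L1
        # proximity term lam * |x_cf - x| keeps the counterfactual close to x.
        grad = -w + lam * np.sign(x_cf - x)
        x_cf -= lr * grad
        # Feasibility projection (illustrative only).
        x_cf[1] = max(x_cf[1], x[1])    # education_years cannot decrease
        x_cf[2] = x[2]                  # age cannot be changed
    return x_cf

x = np.array([3.0, 2.0, 40.0])          # rejected applicant (p ~ 0.004)
x_cf = counterfactual(x)
print("original      :", x, "p =", round(float(predict_proba(x)), 3))
print("counterfactual:", np.round(x_cf, 2), "p =", round(float(predict_proba(x_cf)), 3))
```

In the paper's setting, the hand-coded projection above would be replaced by constraints derived from causal relationships among the input features, so that only perturbations an end-user could actually act on are proposed.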