A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2021; you can also visit the original URL.
The file type is application/pdf.
To trust or not to trust an explanation: using LEAF to evaluate local linear XAI methods
2021
PeerJ Computer Science
The main objective of eXplainable Artificial Intelligence (XAI) is to provide effective explanations for black-box classifiers. The existing literature lists many desirable properties for explanations to be useful, but there is scarce consensus on how to quantitatively evaluate explanations in practice. Moreover, explanations are typically used only to inspect black-box models, and the proactive use of explanations as decision support is generally overlooked. Among the many approaches to […]
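The abstract points at the lack of quantitative evaluation for local explanations. As a minimal sketch, and not the paper's LEAF framework itself, the code below uses the `lime` package (one of the local linear explainers the paper evaluates) to estimate one such quantitative property, explanation stability, by rerunning the explainer on the same instance and comparing the top-k feature sets. The dataset, model, number of runs, and k are illustrative assumptions.

```python
# A minimal sketch, NOT the paper's LEAF implementation: quantify the
# stability of a local linear explanation by rerunning LIME on the same
# instance and measuring agreement between the top-k feature sets.
# Assumptions: scikit-learn and the `lime` package are installed; the
# dataset, model, number of runs, and k are illustrative choices.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
)

def top_k_features(instance, k=5):
    # One LIME run; as_map() returns {label: [(feature_id, weight), ...]}.
    exp = explainer.explain_instance(
        instance, model.predict_proba, labels=(1,), num_features=k
    )
    return {feature_id for feature_id, _ in exp.as_map()[1]}

# Pairwise Jaccard similarity over repeated runs: 1.0 would mean the
# explainer always selects the same top-k features (perfectly stable).
runs = [top_k_features(data.data[0]) for _ in range(5)]
pairs = [(a, b) for i, a in enumerate(runs) for b in runs[i + 1:]]
scores = [len(a & b) / len(a | b) for a, b in pairs]
print(f"mean pairwise Jaccard stability: {np.mean(scores):.2f}")
```

Jaccard overlap of top-k features is only one possible proxy for stability; a score well below 1.0 on repeated runs illustrates the kind of unreliability that motivates evaluating explainers quantitatively before trusting them.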
doi:10.7717/peerj-cs.479
pmid:33977131
pmcid:PMC8056245
fatcat:ltbymvindjc3doo2g77uugla3y