Abstract

State-of-the-art deep-learning systems use decision rules that are challenging for humans to model. Explainable AI (XAI) attempts to improve human understanding but rarely accounts for how people typically reason about unfamiliar agents. We propose explicitly modelling the human explainee via Bayesian teaching, which evaluates explanations by how much they shift explainees' inferences toward a desired goal. We assess Bayesian teaching in a binary image classification task across a

doi:10.1038/s41598-021-89267-4 pmid:33972625
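The core idea in the abstract, that an explanation is scored by how much it shifts a (modelled) explainee's inferences toward a target, can be sketched for a binary-hypothesis setting. This is a minimal illustrative sketch, not the paper's implementation: the hypotheses, likelihood values, and the choice of posterior mass on the target as the teaching score are all assumptions made for the example.

```python
import numpy as np

# Hypothetical setup: two candidate decision rules (hypotheses) the
# explainee might hold about the classifier. likelihood[h][x] is the
# assumed probability P(example x | hypothesis h); numbers are made up.
likelihood = np.array([
    [0.8, 0.1, 0.5],   # hypothesis 0 (the teaching target)
    [0.2, 0.7, 0.5],   # hypothesis 1
])
prior = np.array([0.5, 0.5])  # explainee's prior over hypotheses

def learner_posterior(x):
    """Bayesian update of the modelled explainee after seeing example x."""
    unnorm = prior * likelihood[:, x]
    return unnorm / unnorm.sum()

# The teacher scores each candidate explanation (example) by the
# posterior mass it moves onto the target hypothesis, then shows the
# highest-scoring one.
target = 0
scores = [learner_posterior(x)[target] for x in range(likelihood.shape[1])]
best_example = int(np.argmax(scores))
```

Under these made-up likelihoods, example 0 is selected because it is the most diagnostic of the target hypothesis; example 2 is uninformative (equal likelihood under both hypotheses) and leaves the posterior at the prior.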