A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2022; you can also visit the original URL.
The file type is application/pdf.
Training Characteristic Functions with Reinforcement Learning: XAI-methods play Connect Four
[article]
2022
arXiv
pre-print
One of the goals of Explainable AI (XAI) is to determine which input components were relevant for a classifier decision. This is commonly known as saliency attribution. Characteristic functions (from cooperative game theory) are able to evaluate partial inputs and form the basis for theoretically "fair" attribution methods like Shapley values. Given only a standard classifier function, it is unclear how partial input should be realised. Instead, most XAI-methods for black-box classifiers like …
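To make the game-theoretic idea in the abstract concrete: a characteristic function assigns a value to every coalition (subset) of players, and the Shapley value of a player is its average marginal contribution over all coalitions. The sketch below computes exact Shapley values for a small, hypothetical 3-player game (the function `shapley_values` and the example game are illustrative assumptions, not code from the paper).

```python
from itertools import combinations
from math import factorial


def shapley_values(players, v):
    """Exact Shapley values for a characteristic function v: frozenset -> float.

    phi_i = sum over coalitions S not containing i of
            |S|! * (n - |S| - 1)! / n! * (v(S ∪ {i}) - v(S))
    """
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(n):
            for combo in combinations(others, k):
                S = frozenset(combo)
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += weight * (v(S | {i}) - v(S))
        phi[i] = total
    return phi


# Hypothetical game: a coalition has value 1 if at least two players cooperate.
players = ["a", "b", "c"]
v = lambda S: 1.0 if len(S) >= 2 else 0.0
print(shapley_values(players, v))  # symmetric game: each player gets 1/3
```

Because the game is symmetric, each player receives an equal share, and the values sum to `v` of the grand coalition (the "efficiency" property). Exact computation enumerates all 2^(n-1) coalitions per player, which is why practical XAI methods approximate Shapley values by sampling.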
arXiv:2202.11797v2
fatcat:gv6z7ffscrdyhaqjz2cyvy5fli