AcME – Accelerated Model-agnostic Explanations: Fast Whitening of the Machine-Learning Black Box

David Dandolo, Chiara Masiero, Mattia Carletti, Davide Dalle Pezze, Gian Antonio Susto
2021, arXiv preprint
In the context of human-in-the-loop Machine Learning applications, like Decision Support Systems, interpretability approaches should provide actionable insights without making users wait. In this paper, we propose Accelerated Model-agnostic Explanations (AcME), an interpretability approach that quickly provides feature importance scores at both the global and the local level. AcME can be applied a posteriori to any regression or classification model. Not only does AcME compute feature ranking, but it also provides a what-if analysis tool to assess how changes in feature values would affect model predictions. We evaluated the proposed approach on synthetic and real-world datasets, also in comparison with SHapley Additive exPlanations (SHAP), the approach we drew inspiration from and currently one of the state-of-the-art model-agnostic interpretability approaches. We achieved comparable results in terms of the quality of the produced explanations while dramatically reducing the computational time, and we provide consistent visualizations for global and local interpretations. To foster research in this field, and for the sake of reproducibility, we also provide a repository with the code used for the experiments.
arXiv:2112.12635v1
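
As a rough, self-contained illustration of the kind of perturbation-based, model-agnostic importance scoring and what-if analysis described in the abstract (a minimal sketch, not the AcME algorithm from the paper or its released code), the snippet below sweeps each feature over a grid of its empirical quantiles while holding the remaining features at a baseline, and uses the spread of the resulting black-box predictions as that feature's global importance. The function name quantile_importance, the median baseline, the quantile grid, and the max-min aggregation are all illustrative assumptions.

# Illustrative sketch only: a generic quantile-perturbation importance score
# for an arbitrary black-box predictor (assumed details, not the AcME method).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor


def quantile_importance(predict, X, n_quantiles=10):
    """Return one importance score per feature for a black-box `predict`."""
    baseline = np.median(X, axis=0)                 # reference point for the what-if sweep
    quantile_grid = np.linspace(0.0, 1.0, n_quantiles)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        # Build n_quantiles copies of the baseline, varying only feature j.
        probes = np.tile(baseline, (n_quantiles, 1))
        probes[:, j] = np.quantile(X[:, j], quantile_grid)
        preds = predict(probes)
        # Importance = how much the prediction moves as feature j sweeps its range.
        importances[j] = preds.max() - preds.min()
    return importances


if __name__ == "__main__":
    X, y = make_regression(n_samples=500, n_features=5, n_informative=2, random_state=0)
    model = RandomForestRegressor(random_state=0).fit(X, y)
    scores = quantile_importance(model.predict, X)
    print(np.round(scores / scores.sum(), 3))        # normalized global importance scores

The per-probe predictions computed inside the loop are also the raw material for a what-if view: plotting them against the swept quantiles shows how the model's output would change as a single feature moves across its observed range.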