How to Explain Neural Networks: an Approximation Perspective

Hangcheng Dong, Bingguo Liu, Fengdong Chen, Dong Ye, Guodong Liu
2021 arXiv   pre-print
The lack of interpretability has hindered the large-scale adoption of AI technologies, yet what interpretability fundamentally means, and how to put it into practice, remains unclear. In this study we propose notions of interpretability grounded in approximation theory. We first develop this approximation-based interpretation for a specific model, the fully connected neural network, and then propose to use the MLP as a universal interpreter for explaining arbitrary black-box models. Extensive experiments demonstrate the effectiveness of our approach.
arXiv:2105.07831v2
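
A minimal sketch of one plausible reading of the abstract's idea, not the authors' code: if "MLP as a universal interpreter" means fitting a fully connected network to approximate a black-box model's input-output mapping (justified by the universal approximation property), the workflow could look like the following. The data, the `GradientBoostingRegressor` stand-in black box, and all hyperparameters here are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor   # stand-in black-box model (assumption)
from sklearn.neural_network import MLPRegressor          # fully connected surrogate interpreter

# Synthetic data for illustration only.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(2000, 5))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=2000)

# 1. Train an arbitrary black-box model on the original task.
black_box = GradientBoostingRegressor().fit(X, y)

# 2. Fit an MLP to approximate the black box's predictions,
#    i.e. distil the black box into a fully connected network.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0).fit(X, black_box.predict(X))

# 3. Measure fidelity of the approximation; the surrogate, being an MLP,
#    can then be analysed in place of the opaque black box.
fidelity = np.mean((surrogate.predict(X) - black_box.predict(X)) ** 2)
print(f"surrogate fidelity (MSE to black box): {fidelity:.4f}")
```

The design choice this sketch assumes is that interpretability is obtained by analysing the approximating MLP (its weights or input gradients) rather than the original model directly; the paper itself should be consulted for the authors' exact construction.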