Proceedings of the Web Conference 2021
Global interpretability is a vital requirement for image classification applications. Existing interpretability methods mainly explain model behavior by identifying salient image patches, which require manual effort from users to make sense of, and which typically do not support model validation with questions that investigate multiple visual concepts. In this paper, we introduce a scalable human-in-the-loop approach for global interpretability. Salient image areas identified by local

doi:10.1145/3442381.3450069