Towards Visually Explaining Variational Autoencoders
2020
2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Recent advances in Convolutional Neural Network (CNN) model interpretability have led to impressive progress in visualizing and understanding model predictions. In particular, gradient-based visual attention methods have driven much recent effort in using visual attention maps as a means for visual explanations. A key problem, however, is that these methods are designed for classification and categorization tasks, and their extension to explaining generative models, e.g., variational autoencoders (VAEs), is not trivial.
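To make the gradient-based attention idea concrete, here is a minimal NumPy sketch of a Grad-CAM-style attention map: channel weights are obtained by global-average-pooling the gradients, and the weighted sum of feature maps is passed through a ReLU. This is an illustrative sketch, not the paper's implementation; for a VAE one would take the gradients of a latent variable rather than a class score. The function name and shapes are assumptions.

```python
import numpy as np

def grad_cam_map(activations, gradients):
    """Grad-CAM-style attention map (illustrative sketch).

    activations: (K, H, W) feature maps from a convolutional layer.
    gradients:   (K, H, W) gradients of the target score with respect
                 to those feature maps (for a VAE, the target could be
                 a latent variable instead of a class logit).
    Returns an (H, W) map normalized to [0, 1].
    """
    # Channel weights: global-average-pool the gradients (alpha_k in Grad-CAM).
    weights = gradients.mean(axis=(1, 2))
    # Weighted sum of feature maps over the channel axis.
    cam = np.tensordot(weights, activations, axes=1)
    # ReLU keeps only regions with a positive influence on the target.
    cam = np.maximum(cam, 0.0)
    if cam.max() > 0:
        cam /= cam.max()  # normalize for visualization
    return cam
```

In practice the resulting map is upsampled to the input resolution and overlaid on the image as a heatmap.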
doi:10.1109/cvpr42600.2020.00867
dblp:conf/cvpr/LiuLZKWBRC20
fatcat:ic3yv2knd5aehcmezxyb5s3e4u