[Re] Reproducibility study - Does enforcing diversity in hidden states of LSTM-Attention models improve transparency?
2021
Zenodo
It has been shown [1] that the weights in attention mechanisms do not necessarily offer a faithful explanation of a model's predictions. In the paper "Towards Transparent and Explainable Attention Models", Mohankumar et al. [2] propose two methods to enhance the faithfulness and plausibility of the explanations provided by an LSTM model combined with a basic attention mechanism.

Scope of Reproducibility - For this reproducibility study, we focus on the main claims made in this paper:
• The attention
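The "diversity in hidden states" in the title refers to Mohankumar et al.'s Diversity LSTM, which penalizes the conicity of the hidden states, i.e. their mean cosine similarity to the mean hidden vector, so that attention must weigh genuinely distinct states. Below is a minimal PyTorch sketch of such a conicity-style penalty; the function name `conicity`, the masking convention, and the weighting hyperparameter are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn.functional as F

def conicity(hidden: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Mean cosine similarity of each hidden state to the mean hidden vector.

    hidden: (batch, seq_len, dim) LSTM outputs.
    mask:   (batch, seq_len), 1 for real tokens, 0 for padding.
    Returns a scalar; lower values mean more diverse hidden states.
    """
    m = mask.unsqueeze(-1).float()                        # (B, T, 1)
    lengths = m.sum(dim=1).clamp(min=1.0)                 # (B, 1)
    mean_vec = (hidden * m).sum(dim=1) / lengths          # (B, D)
    # "Alignment to mean": cosine similarity of each state to the mean vector
    atm = F.cosine_similarity(hidden, mean_vec.unsqueeze(1), dim=-1)  # (B, T)
    atm = atm * mask.float()                              # zero out padding
    return (atm.sum(dim=1) / lengths.squeeze(-1)).mean()

# A diversity-enforcing objective would then add this as a penalty, e.g.
#   loss = task_loss + lambda_c * conicity(hidden, mask)
# where lambda_c is a hypothetical weighting hyperparameter.
```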
doi:10.5281/zenodo.4835592