Cascaded Attention based Unsupervised Information Distillation for Compressive Summarization
2017
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
When people recall and digest what they have read in order to write a summary, the important content is more likely to attract their attention. Inspired by this observation, we propose a cascaded attention based unsupervised model to estimate salience information from text for compressive multi-document summarization. The attention weights are learned automatically by an unsupervised data reconstruction framework, which can capture sentence salience. By adding sparsity constraints on the …
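The abstract's core idea (attention weights learned by reconstruction serving as salience scores) can be illustrated with a deliberately simplified sketch. This is not the paper's cascaded neural model: the vectors, the reconstruction target, and the plain gradient-descent loop below are all my own toy assumptions, meant only to show how minimizing a reconstruction loss can yield interpretable per-sentence weights.

```python
import numpy as np

def salience_by_reconstruction(sent_vecs, steps=500, lr=0.1):
    """Toy unsupervised salience estimator (illustrative only).

    Learns softmax attention weights so that the attention-weighted
    combination of sentence vectors approximates a document vector
    (here, simply the sum of sentence vectors). The learned weights
    are returned as salience scores.
    """
    S = np.asarray(sent_vecs, dtype=float)   # (n_sentences, dim)
    doc = S.sum(axis=0)                      # crude document representation
    logits = np.zeros(len(S))                # unnormalized attention scores

    for _ in range(steps):
        a = np.exp(logits - logits.max())
        a /= a.sum()                         # softmax -> attention weights
        err = a @ S - doc                    # reconstruction error
        # gradient of 0.5*||a@S - doc||^2 w.r.t. logits (softmax Jacobian)
        g = S @ err
        logits -= lr * (a * (g - a @ g))

    a = np.exp(logits - logits.max())
    return a / a.sum()
```

With sentence vectors `[[3, 0], [0, 1], [1, 1]]`, the first sentence dominates the document sum, so the loop assigns it the largest weight; a sparsity constraint, as the abstract suggests, would push such distributions further toward a few highly salient sentences.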
doi:10.18653/v1/d17-1221
dblp:conf/emnlp/LiLBGL17
fatcat:liis25rfrfcyvn4372ybikfw3u