A Large-Scale Multi-Document Summarization Dataset from the Wikipedia Current Events Portal

Demian Gholipour Ghalandari, Chris Hokamp, Nghia The Pham, John Glover, Georgiana Ifrim
2020. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL 2020).
Abstract: Multi-document summarization (MDS) aims to compress the content in large document collections into short summaries and has important applications in story clustering for newsfeeds, presentation of search results, and timeline generation. However, there is a lack of datasets that realistically address such use cases at a scale large enough for training supervised models for this task. This work presents a new dataset for MDS that is large both in the total number of document clusters and in the size of individual clusters. We build this dataset by leveraging the Wikipedia Current Events Portal (WCEP), which provides concise and neutral human-written summaries of news events, with links to external source articles. We also automatically extend these source articles by looking for related articles in the Common Crawl archive. We provide a quantitative analysis of the dataset and empirical results for several state-of-the-art MDS techniques. The dataset is available at https://github.com/complementizer/wcep-mds-dataset.
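The abstract describes clusters of source articles paired with a human-written event summary. A minimal sketch of how such a dataset might be consumed, assuming a JSON-lines layout with one cluster per line and illustrative field names (`summary`, `articles`) that are not confirmed by this record:

```python
import json

# Hypothetical single-cluster sample mimicking a JSON-lines release:
# each line is one event cluster with a summary and its source articles.
# Field names here are assumptions for illustration only.
sample_jsonl = json.dumps({
    "id": 0,
    "summary": "An earthquake strikes the region, causing damage.",
    "articles": [
        {"title": "Quake hits city", "text": "..."},
        {"title": "Damage reported downtown", "text": "..."},
    ],
})

def load_clusters(jsonl_text):
    """Parse one event cluster per non-empty JSON line."""
    return [json.loads(line) for line in jsonl_text.splitlines() if line.strip()]

clusters = load_clusters(sample_jsonl)
for cluster in clusters:
    # Each cluster pairs one summary with multiple source articles (MDS input).
    print(cluster["summary"], "--", len(cluster["articles"]), "articles")
```

The actual release on GitHub should be consulted for the real file format and field names; this sketch only illustrates the cluster-per-line structure the abstract implies.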
doi:10.18653/v1/2020.acl-main.120