5,835 Hits in 7.2 sec

Read, Highlight and Summarize: A Hierarchical Neural Semantic Encoder-based Approach [article]

Rajeev Bhatt Ambati, Saptarashmi Bandyopadhyay, Prasenjit Mitra
2019 arXiv   pre-print
To address these issues, we have adapted Neural Semantic Encoders (NSE), a class of memory-augmented neural networks, to text summarization by improving their functionalities, and propose a novel hierarchical  ...  In this paper, we propose a method based on extracting the highlights of a document: a key concept that is conveyed in a few sentences.  ...  NEURAL SEMANTIC ENCODER: A Neural Semantic Encoder (Munkhdalai & Yu, 2017) is a memory-augmented neural network equipped with an encoding memory that supports read, compose, and write operations.  ... 
arXiv:1910.03177v2 fatcat:mfvtdsxk5ba4fps3tvrtscivrm
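The read/compose/write cycle that the NSE snippet above refers to can be illustrated with a minimal numpy sketch; the single-matrix compose step and the soft overwrite rule below are simplifying assumptions for illustration, not the exact parameterization of Munkhdalai & Yu (2017).

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def nse_step(x, memory, W_c):
    """One NSE-style step on input vector x over memory of shape (slots, d):
    read by attention, compose the read vector with the input, write back."""
    # READ: attention over memory slots
    alpha = softmax(memory @ x)                 # (slots,)
    m = alpha @ memory                          # retrieved vector, shape (d,)
    # COMPOSE: combine input and retrieved memory (a tanh layer here)
    c = np.tanh(W_c @ np.concatenate([x, m]))  # (d,)
    # WRITE: overwrite each slot in proportion to how strongly it was read
    memory = (1 - alpha[:, None]) * memory + alpha[:, None] * c
    return c, memory
```

Running one step leaves the memory shape unchanged while shifting the most-attended slots toward the composed vector.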

Neural Summarization by Extracting Sentences and Words [article]

Jianpeng Cheng, Mirella Lapata
2016 arXiv   pre-print
We develop a general framework for single-document summarization composed of a hierarchical document encoder and an attention-based extractor.  ...  Traditional approaches to extractive summarization rely heavily on human-engineered features. In this work we propose a data-driven approach based on neural networks and continuous sentence features.  ...  Acknowledgments We would like to thank three anonymous reviewers and members of the ILCC at the School of Informatics for their valuable feedback.  ... 
arXiv:1603.07252v3 fatcat:svahhf5yz5dx5bdplgzg2qrq4u
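The hierarchical encoder plus attention-based extractor described in this abstract can be sketched as follows; mean-pooling stands in for the paper's learned sentence and document encoders, so this is an illustrative simplification rather than the authors' model.

```python
import numpy as np

def extractive_scores(doc_word_vecs):
    """doc_word_vecs: list of (n_words_i, d) arrays, one per sentence.
    Sentence encoder: mean of word vectors (stand-in for a CNN/LSTM).
    Document encoder: mean of sentence vectors.
    Extractor: attention of each sentence against the document vector."""
    sent_vecs = np.stack([s.mean(axis=0) for s in doc_word_vecs])  # (n_sents, d)
    doc_vec = sent_vecs.mean(axis=0)                               # (d,)
    logits = sent_vecs @ doc_vec
    e = np.exp(logits - logits.max())
    return e / e.sum()   # salience distribution over sentences
```

Taking the top-k sentences under this distribution yields an extractive summary.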

From Standard Summarization to New Tasks and Beyond: Summarization with Manifold Information [article]

Shen Gao, Xiuying Chen, Zhaochun Ren, Dongyan Zhao, Rui Yan
2020 arXiv   pre-print
In this paper, we focus on surveying these new summarization tasks and approaches in real-world applications.  ...  Instead, there is much manifold information to be summarized, such as the summary of a web page based on a query in a search engine, extremely long documents (e.g., academic papers), dialog history and  ...  Rui Yan is partially supported as a Young Fellow of the Beijing Institute of Artificial Intelligence (BAAI).  ... 
arXiv:2005.04684v1 fatcat:35ub2qoaezdq7fw7ptbvrbj37i

Joint Hierarchical Semantic Clipping and Sentence Extraction for Document Summarization

Wanying Yan, Junjun Guo
2020 Journal of Information Processing Systems  
In view of the importance and redundancy of news text information, in this paper, we propose a neural extractive summarization approach with joint sentence semantic clipping and selection, which can effectively  ...  Specifically, a hierarchical selective encoding network is constructed for both sentence-level and document-level document representations, and data containing important information is extracted on news  ...  Document Summarization Model: Based on the importance of the news text information and data redundancy, we propose a neural document summarization approach via joint hierarchical semantic clipping and sentence  ... 
doi:10.3745/jips.04.0181 dblp:journals/jips/YanG20 fatcat:4b755cvpvzemndotvbw5nxje3a

Coreference-Aware Dialogue Summarization [article]

Zhengyuan Liu, Ke Shi, Nancy F. Chen
2021 arXiv   pre-print
Summarizing conversations via neural approaches has been gaining research traction lately, yet it is still challenging to obtain practical solutions.  ...  Therefore, in this work, we investigate different approaches to explicitly incorporate coreference information in neural abstractive dialogue summarization models to tackle the aforementioned challenges  ...  Acknowledgments This research was supported by funding from the Institute for Infocomm Research (I2R) under A*STAR ARES, Singapore. We thank Ai Ti Aw and Minh Nguyen for insightful discussions.  ... 
arXiv:2106.08556v2 fatcat:mnzeos4bdjgareihflrcl76vce

Neural Extractive Summarization with Side Information [article]

Shashi Narayan, Nikos Papasarantopoulos, Shay B. Cohen, Mirella Lapata
2017 arXiv   pre-print
We develop a framework for single-document summarization composed of a hierarchical document encoder and an attention-based extractor with attention over side information.  ...  We show that extractive summarization with side information consistently outperforms its counterpart that does not use any side information, in terms of both informativeness and fluency.  ...  Acknowledgments We thank Jianpeng Cheng for providing us with the CNN dataset and an implementation of PointerNet.  ... 
arXiv:1704.04530v2 fatcat:inb6fndiifc3pj5t57zni5qnbm

Neural Summarization by Extracting Sentences and Words

Jianpeng Cheng, Mirella Lapata
2016 Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)  
We develop a general framework for single-document summarization composed of a hierarchical document encoder and an attention-based extractor.  ...  Traditional approaches to extractive summarization rely heavily on human-engineered features. In this work we propose a data-driven approach based on neural networks and continuous sentence features.  ...  Acknowledgments We would like to thank three anonymous reviewers and members of the ILCC at the School of Informatics for their valuable feedback.  ... 
doi:10.18653/v1/p16-1046 dblp:conf/acl/0001L16 fatcat:usutfzsdhrakxn2d4xm2e36tou

Multi-Task Learning for Abstractive and Extractive Summarization

Yangbin Chen, Yun Ma, Xudong Mao, Qing Li
2019 Data Science and Engineering  
In particular, our framework is composed of a shared hierarchical document encoder, a hierarchical attention mechanism-based decoder, and an extractor.  ...  In this paper, to fully integrate the relatedness and advantages of both approaches, we propose a general unified framework for abstractive summarization which incorporates extractive summarization as  ...  We use the hard parameter sharing approach through sharing the hierarchical encoder to better capture the semantics of the document.  ... 
doi:10.1007/s41019-019-0087-7 fatcat:5aob3h4hjjbxvh3cu2iyeosv7q

Siamese hierarchical attention networks for extractive summarization

José-Ángel González, Encarna Segarra, Fernando García-Granada, Emilio Sanchis, Lluís-F. Hurtado
2019 Journal of Intelligent & Fuzzy Systems  
In this paper, we present an extractive approach to document summarization based on Siamese Neural Networks.  ...  In summary, we propose a novel end-to-end neural network that addresses extractive summarization as a binary classification problem and obtains promising results in line with the state of the art on the CNN  ...  Acknowledgements This work has been partially supported by the Spanish MINECO and FEDER funds under project AMIC (TIN2017-85854-C4-2-R).  ... 
doi:10.3233/jifs-179011 fatcat:4fglj6nlpnbnnnf6wdh5f3ywhq
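A Siamese-style selector of the kind this abstract describes can be sketched as below; the shared encoder is assumed to have already produced the vectors, and the cosine-similarity threshold is a hypothetical stand-in for the paper's learned binary classifier.

```python
import numpy as np

def siamese_select(sent_vecs, doc_vec, threshold=0.0):
    """Siamese-style extraction: one shared encoder (here assumed to have
    run already) maps each sentence and the document into the same space;
    a sentence is kept when its cosine similarity to the document vector
    exceeds a threshold (extraction as binary classification)."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    sims = np.array([cos(s, doc_vec) for s in sent_vecs])
    return sims > threshold, sims
```

The boolean mask marks the sentences selected for the extract; the raw similarities allow ranking instead of thresholding.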

Subtopic-driven Multi-Document Summarization

Xin Zheng, Aixin Sun, Jing Li, Karthik Muthuswamy
2019 Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)  
The contextual information is encoded through a hierarchical RNN architecture.  ...  Sentence salience is estimated in a hierarchical way with subtopic salience and relative sentence salience, by considering the contextual information.  ...  using a greedy approach.  ... 
doi:10.18653/v1/d19-1311 dblp:conf/emnlp/ZhengSLM19 fatcat:urwhgj2qkrblpmu4mmqxax5zbu

A literature review of abstractive summarization methods

D. V. Shypik, Petro I. Bidyuk
2019 Sistemnì Doslìdženâ ta Informacìjnì Tehnologìï  
The paper aims to give a general perspective on both the state-of-the-art and older approaches, while explaining the methods and approaches.  ...  Lloret and Palomar [12] proposed to combine Graph-based abstractive approach with extractive approach (COMPENDIUM) in several ways.  ...  [25] proposed a fully data-driven approach to abstractive sentence summarization (which authors call ABS -Attention-Based Summarization).  ... 
doi:10.20535/srit.2308-8893.2019.4.07 fatcat:ua7qct3a5bfizgrfevdrayrzae

Weakly Supervised Extractive Summarization with Attention

Yingying Zhuang, Yichao Lu, Simi Wang
2021 SIGDIAL Conferences  
We test our models on customer service dialogues, and experimental results demonstrate that our models can reliably select informative sentences and words for automatic summarization.  ...  In this paper, we develop a general framework that generates extractive summaries as a byproduct of supervised learning tasks on indirect signals, with the help of an attention mechanism.  ...  Instead of reading through the entire text, memorizing all the information, and then writing up a summary from memory, humans often read the text word by word, sentence by sentence, and highlight those  ... 
dblp:conf/sigdial/ZhuangLW21 fatcat:laian5c2tvai3j6upvrlyionoi
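The byproduct idea above can be sketched in a few lines: a supervised task pools sentence vectors with attention, and the attention weights then double as extractive salience scores. The attention vector below is random rather than learned, and all names are hypothetical, so this only illustrates the mechanism.

```python
import numpy as np

def attention_pool(sent_vecs, w_att):
    """Attention pooling for the supervised task; the normalized weights
    double as sentence-salience scores for extraction."""
    scores = sent_vecs @ w_att
    e = np.exp(scores - scores.max())
    alpha = e / e.sum()             # attention weights over sentences
    pooled = alpha @ sent_vecs      # representation fed to the classifier
    return pooled, alpha

rng = np.random.default_rng(0)
d = 8
sents = rng.normal(size=(5, d))     # 5 sentence embeddings
w = rng.normal(size=d)              # attention vector (learned in practice)
pooled, alpha = attention_pool(sents, w)
summary_idx = np.argsort(alpha)[::-1][:2]   # top-2 sentences as the extract
```

In a real system, `w` would be trained on the supervised signal (e.g. a dialogue-level label), and extraction would come for free at inference time.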

Extractive Summarization Based on Dynamic Memory Network

Ping Li, Jiong Yu
2021 Symmetry  
We present an extractive summarization model based on BERT and a dynamic memory network.  ...  We also present a dynamic memory network method for extractive summarization. Experiments are conducted on several summarization benchmark datasets.  ...  Extractive Summarization Based on BERT Sentence Encoder: Differently from Narayan [9, 11], we do not use the hierarchical encoder and use only the sentence encoder.  ... 
doi:10.3390/sym13040600 fatcat:wta4uiphqbcm7gghnv4tpdniei
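A dynamic-memory-style scorer in the spirit of this abstract might look like the sketch below; the dot-product gate and additive tanh update are simplifying assumptions (the dynamic memory network literature typically uses a GRU-based episode update), not the authors' exact model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dmn_passes(sent_vecs, query, n_passes=2):
    """Dynamic-memory-style scoring: repeatedly attend over sentence
    vectors and refine a memory vector; the final per-sentence gates
    serve as extraction scores."""
    memory = query.copy()
    for _ in range(n_passes):
        gates = sigmoid(sent_vecs @ memory)   # per-sentence relevance gate
        episode = (gates[:, None] * sent_vecs).sum(axis=0) / (gates.sum() + 1e-9)
        memory = np.tanh(memory + episode)    # simplified memory update
    return gates
```

The sentence vectors would come from a BERT-style sentence encoder in the paper's setting; here they are just dense vectors.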

Abstractive and mixed summarization for long-single documents [article]

Roger Barrull, Jugal Kalita
2020 arXiv   pre-print
The results from this work show that models that use a hierarchical encoder to model the structure of the document perform better than the rest.  ...  The lack of diversity in the datasets available for automatic summarization of documents has meant that the vast majority of neural models for automatic summarization have been trained with news articles  ...  Böhm (Böhm 2019) highlighted the limitations of ROUGE-based rewarders and proposed neural network-based rewarders to predict the similarity between document and summary.  ... 
arXiv:2007.01918v1 fatcat:s3adavfxmrbkjhndrpsyp6tcre

Leveraging Locality in Abstractive Text Summarization [article]

Yixin Liu, Ansong Ni, Linyong Nan, Budhaditya Deb, Chenguang Zhu, Ahmed H. Awadallah, Dragomir Radev
2022 arXiv   pre-print
Our model is applied to individual pages, which contain parts of the input grouped by the principle of locality, during both the encoding and decoding stages.  ...  Instead of designing more efficient attention modules, we approach this problem by investigating whether models with a restricted context can have competitive performance compared with the memory-efficient  ...  In this framework, tokens in different pages never directly interact with each other during either encoding or decoding, which highlights the role of locality in the text summarization task.  ... 
arXiv:2205.12476v1 fatcat:nyurpblaajc6lkohtn574owabq
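The page-based locality idea is easy to illustrate: the input is split into fixed-size pages that are encoded and decoded independently, so attention cost grows with the page size rather than the full input length. The helper below is a hypothetical sketch, not the authors' implementation.

```python
def paginate(tokens, page_size):
    """Group input tokens into locality-based 'pages'. Each page is then
    processed independently, keeping per-page attention cost at
    O(page_size^2) instead of O(n^2) over the whole input."""
    return [tokens[i:i + page_size] for i in range(0, len(tokens), page_size)]
```

For example, a 10-token input with a page size of 4 yields pages of 4, 4, and 2 tokens, and no attention is computed across page boundaries.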
Showing results 1 — 15 out of 5,835 results