Showing results 1–15 of 36

Comparing Topiary-Style Approaches to Headline Generation [chapter]

Ruichao Wang, Nicola Stokes, William P. Doran, Eamonn Newman, Joe Carthy, John Dunnion
2005 Lecture Notes in Computer Science  
In this paper we compare a number of Topiary-style headline generation systems.  ...  The Topiary system uses a statistical learning approach to finding topic labels for headlines, while our approach, the LexTrim system, identifies key summary words by analysing the lexical cohesive structure  ...  to the topiary-style headline in the order of frequency.  ... 
doi:10.1007/978-3-540-31865-1_12 fatcat:5qzxijyqgfgbzc75jyuzoosijy

LexTrim: A Lexical Cohesion Based Approach to Parse-and-Trim Style Headline Generation [chapter]

Ruichao Wang, Nicola Stokes, William Doran, Eamonn Newman, John Dunnion, Joe Carthy
2005 Lecture Notes in Computer Science  
In this paper we compare two parse-and-trim style headline generation systems.  ...  The Topiary system uses a statistical learning approach to finding topic labels for headlines, while our approach, the LexTrim system, identifies key summary words by analysing the lexical cohesion structure  ...  The best performing system at this workshop was the Topiary approach [2] which generated headlines by combining a set of topic descriptors generated from the DUC 2004 corpus with a compressed version of  ... 
doi:10.1007/978-3-540-30586-6_71 fatcat:cguxhtknjfhtdftuz232m7jto4

Machine Learning Approach to Augmenting News Headline Generation

Ruichao Wang, John Dunnion, Joe Carthy
2005 International Joint Conference on Natural Language Processing  
We compare our system with the Topiary system which, in contrast, uses a statistical learning approach to finding topic descriptors for headlines.  ...  Topiary-style headlines consist of a number of general topic labels followed by a compressed version of the lead sentence of a news story.  ...  to title generation, and establish which of our alternative techniques for padding Topiary-style headlines with topic labels works best.  ... 
dblp:conf/ijcnlp/WangDC05 fatcat:epsy5txrtnf53ddnugthoccz7e

Title Generation with Quasi-Synchronous Grammar

Kristian Woodsend, Yansong Feng, Mirella Lapata
2010 Conference on Empirical Methods in Natural Language Processing  
Experiments on headline and image caption generation show that our method obtains state-of-the-art performance using essentially the same model for both tasks without any major modifications.  ...  Based on an integer linear programming formulation, the model learns to generate summaries that satisfy both types of preferences, while ensuring that length, topic coverage and grammar constraints are  ...  Acknowledgments We are grateful to David Chiang and Noah Smith for their input on earlier versions of this work.  ... 
dblp:conf/emnlp/WoodsendFL10 fatcat:ayle76ik25flhocena4osb4n2e
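The integer linear programming formulation mentioned in the Woodsend et al. snippet can be illustrated with a toy sketch: select a subset of candidate phrases that maximizes topic coverage under a length budget. The phrases, topic labels, and budget below are invented for illustration, and exhaustive search stands in for a real ILP solver.

```python
from itertools import combinations

# Toy stand-in for an ILP-style headline model: pick the subset of candidate
# phrases that covers the most topics without exceeding a word budget.
# Phrases, topic labels, and the budget are all invented for illustration.
phrases = {
    "bank robbed": {"crime"},
    "suspect arrested": {"crime", "police"},
    "downtown closed": {"traffic"},
}
BUDGET = 4  # maximum number of words in the headline

def word_count(subset):
    return sum(len(p.split()) for p in subset)

def topic_coverage(subset):
    covered = set()
    for p in subset:
        covered |= phrases[p]
    return len(covered)

# Exhaustive search over all feasible subsets (an ILP solver would encode the
# same choice with binary selection variables and linear constraints).
best = max(
    (s for r in range(len(phrases) + 1)
       for s in combinations(phrases, r)
       if word_count(s) <= BUDGET),
    key=topic_coverage,
)
```

With these inputs the search picks "suspect arrested" plus "downtown closed", covering all three topics within the four-word budget.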

Linguistic challenges in automatic summarization technology

Elke Diedrichsen
2017 Journal of Computer-Assisted Linguistic Research  
This study provides an overview of current approaches to the implementation of automatic summarization technology and discusses the state of the art of the most important NLP tasks involved in them.  ...  , to a size that may be user-defined.  ...  The resulting sentences generally have the style and quality of headlines.  ... 
doi:10.4995/jclr.2017.7787 fatcat:xl2gzccpj5ft7dngtrztrkfrlu

A Neural Attention Model for Abstractive Sentence Summarization [article]

Alexander M. Rush, Sumit Chopra, Jason Weston
2015 arXiv   pre-print
Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build.  ...  In this work, we propose a fully data-driven approach to abstractive sentence summarization.  ...  Baselines Due to the variety of approaches to the sentence summarization problem, we report a broad set of headline-generation baselines.  ... 
arXiv:1509.00685v2 fatcat:35wuwlmxjretvbvwhthmgvzsfm

Task-based evaluation of text summarization using Relevance Prediction

Stacy President Hobson, Bonnie J. Dorr, Christof Monz, Richard Schwartz
2007 Information Processing & Management  
., the user judges relevance based on a short summary and then that same user (not an independent user) decides whether to open (and judge) the corresponding document.  ...  This measure is shown to be a more reliable measure of task performance than LDC Agreement, a current gold-standard based measure used in the summarization evaluation community.  ...  Acknowledgements We are indebted to David Zajic for insights and comments regarding the Relevance Prediction measure. The second author thanks Steve, Carissa, and Ryan for their energy enablement.  ... 
doi:10.1016/j.ipm.2007.01.002 fatcat:4lu4a2tytvehrbojvgq4sw7hey

Discrete Optimization for Unsupervised Sentence Summarization with Word-Level Extraction [article]

Raphael Schumann, Lili Mou, Yao Lu, Olga Vechtomova, Katja Markert
2020 arXiv   pre-print
Additionally, we demonstrate that the commonly reported ROUGE F1 metric is sensitive to summary length.  ...  Our proposed method achieves a new state of the art for unsupervised sentence summarization according to ROUGE scores.  ...  This could be important in certain applications, e.g., headline generation, where the summary language differs from the input in style. Semantic Similarity.  ... 
arXiv:2005.01791v1 fatcat:gpoczqjjfnevdpjevi2liiqpre
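The Schumann et al. snippet's claim that ROUGE F1 is sensitive to summary length is easy to reproduce with a minimal unigram (ROUGE-1) implementation; the headline strings below are made up for the demonstration.

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: harmonic mean of unigram precision and recall."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

reference = "police arrest suspect after downtown bank robbery"

# A short, fully precise summary still scores lower than a padded longer one,
# because recall gains outweigh the precision lost to extra words.
print(rouge1_f1("police arrest suspect", reference))  # 0.6
print(rouge1_f1("police arrest suspect after downtown bank robbery and more words", reference))
```

Here the padded ten-word candidate scores about 0.82 against 0.6 for the precise three-word one, which is why the paper argues for controlling summary length when reporting F1.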

Recent Advances on Neural Headline Generation

Ayana, Shi-Qi Shen, Yan-Kai Lin, Cun-Chao Tu, Yu Zhao, Zhi-Yuan Liu, Mao-Song Sun
2017 Journal of Computer Science and Technology  
Recently, neural models have been proposed for headline generation by learning to map documents to headlines with recurrent neural networks.  ...  Meanwhile, we carry out a detailed error analysis of typical neural headline generation systems to gain deeper insight.  ...  The best system on DUC2004, TOPIARY [32] combines both linguistic and statistical information to generate headlines.  ... 
doi:10.1007/s11390-017-1758-3 fatcat:ea5mn45u7vazljtwmd6ziurapm

Generating Instructive Questions from Multiple Articles to Guide Reading in E-Bibliotherapy

Yunxing Xin, Lei Cao, Xin Wang, Xiaohao He, Ling Feng
2021 Sensors  
to enable them to generate emotional resonance and thus willingness to pursue the reading.  ...  The experimental results show that the proposed Encoder-Decoder with Summary on Contexts with Feature-rich embeddings (ED-SoCF) solution can generate good questions for guiding reading, achieving comparable  ...  Neural question generation methods use deep sequence-to-sequence learning approach to generate questions.  ... 
doi:10.3390/s21093223 pmid:34066519 fatcat:hch5o7y5svfpdezmobttntgj4a

Deep Recurrent Generative Decoder for Abstractive Text Summarization

Piji Li, Wai Lam, Lidong Bing, Zihao Wang
2017 Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing  
We propose a new framework for abstractive text summarization based on a sequence-to-sequence oriented encoder-decoder model equipped with a deep recurrent generative decoder (DRGN).  ...  Abstractive summaries are generated based on both the generative latent variables and the discriminative deterministic states.  ...  Abstraction-based approaches can generate new sentences based on the facts from different source sentences. Barzilay and McKeown (2005) employed sentence fusion to generate a new sentence.  ... 
doi:10.18653/v1/d17-1222 dblp:conf/emnlp/LiLBW17 fatcat:cxqckigu25ghfapr53524ebu5m

Deep Recurrent Generative Decoder for Abstractive Text Summarization [article]

Piji Li, Wai Lam, Lidong Bing, Zihao Wang
2017 arXiv   pre-print
We propose a new framework for abstractive text summarization based on a sequence-to-sequence oriented encoder-decoder model equipped with a deep recurrent generative decoder (DRGN).  ...  Abstractive summaries are generated based on both the generative latent variables and the discriminative deterministic states.  ...  Abstraction-based approaches can generate new sentences based on the facts from different source sentences. Barzilay and McKeown (2005) employed sentence fusion to generate a new sentence.  ... 
arXiv:1708.00625v1 fatcat:sibi26obmndubber7sknym4lgi

A Multi-Lingually Applicable Journalist Toolset For The Big-Data Era

G. Kiomourtzis, G. Giannakopoulos, V. Karkaletsis, A. Kosmopoulos
2016 Zenodo  
Acknowledgments I would like to thank Dr. Octavian Popescu for his constant guidance, endless suggestions and encouragement and full support to finish this work.  ...  Acknowledgments: The authors would like to thank the anonymous reviewers for their helpful comments and suggestions.  ...  Other headline generation systems generally work by first using some metric to identify terms within the document that are likely to appear in the headline, and then constructing a headline containing  ... 
doi:10.5281/zenodo.1242850 fatcat:nfkqg7jhjffdvgezdjzc6xxppa

Salience Estimation with Multi-Attention Learning for Abstractive Text Summarization [article]

Piji Li, Lidong Bing, Zhongyu Wei, Wai Lam
2020 arXiv   pre-print
Attention mechanism plays a dominant role in the sequence generation models and has been used to improve the performance of machine translation and abstractive text summarization.  ...  The context information obtained based on the estimated salience is incorporated with the typical attention mechanism in the decoder to conduct summary generation.  ...  training style.  ... 
arXiv:2004.03589v1 fatcat:cozl6azvu5bpris3qc7z54dfwu

Discrete Optimization for Unsupervised Sentence Summarization with Word-Level Extraction

Raphael Schumann, Lili Mou, Yao Lu, Olga Vechtomova, Katja Markert
2020 Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics   unpublished
Additionally, we demonstrate that the commonly reported ROUGE F1 metric is sensitive to summary length.  ...  Our proposed method achieves a new state of the art for unsupervised sentence summarization according to ROUGE scores.  ...  This could be important in certain applications, e.g., headline generation, where the summary language differs from the input in style. Semantic Similarity.  ... 
doi:10.18653/v1/2020.acl-main.452 fatcat:xdesyt3hgbc6jjb75sps2hxksa