8,349 Hits in 6.0 sec

Reinforcement Learning for Abstractive Question Summarization with Question-aware Semantic Rewards [article]

Shweta Yadav, Deepak Gupta, Asma Ben Abacha, Dina Demner-Fushman
2021 arXiv   pre-print
In this paper, we introduce a reinforcement learning-based framework for abstractive question summarization.  ...  These rewards ensure the generation of semantically valid questions and encourage the inclusion of key medical entities/foci in the question summary.  ...  To address these limitations, this work presents a new reinforcement learning based framework for abstractive question summarization.  ... 
arXiv:2107.00176v1 fatcat:fiftfx2tpjerhhz4u5kkjrde5y

Reinforcement Learning for Abstractive Question Summarization with Question-aware Semantic Rewards

Shweta Yadav, Deepak Gupta, Asma Ben Abacha, Dina Demner-Fushman
2021 Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)   unpublished
In this paper, we introduce a reinforcement learning-based framework for abstractive question summarization.  ...  These rewards ensure the generation of semantically valid questions and encourage the inclusion of key medical entities/foci in the question summary.  ...  To address these limitations, this work presents a new reinforcement learning based framework for abstractive question summarization.  ... 
doi:10.18653/v1/2021.acl-short.33 fatcat:xmr3kmouv5cazndookzoz3c3s4

AnswerSumm: A Manually-Curated Dataset and Pipeline for Answer Summarization [article]

Alexander R. Fabbri, Xiaojian Wu, Srini Iyer, Haoran Li, Mona Diab
2022 arXiv   pre-print
Finally, we propose reinforcement learning rewards to improve factual consistency and answer coverage and analyze areas for improvement.  ...  Each question thread can receive a large number of answers with different perspectives. One goal of answer summarization is to produce a summary that reflects the range of answer perspectives.  ...  T5 is trained for 3 epochs with a linear learning rate scheduler.  ... 
arXiv:2111.06474v2 fatcat:kcjh7frjjvcmhne7vdpe6qu4py

A reinforcement learning formulation to the complex question answering problem

Yllias Chali, Sadid A. Hasan, Mustapha Mojahid
2015 Information Processing & Management  
We use extractive multi-document summarization techniques to perform complex question answering and formulate it as a reinforcement learning problem.  ...  Given a set of complex questions, a list of relevant documents per question, and the corresponding human generated summaries (i.e. answers to the questions) as training data, the reinforcement learning  ...  Acknowledgments We would like to thank the anonymous reviewers for their useful comments.  ... 
doi:10.1016/j.ipm.2015.01.002 fatcat:ics53wqzxje2lmxzjkbt5d3oae

Knowledge Graph-Augmented Abstractive Summarization with Semantic-Driven Cloze Reward [article]

Luyang Huang, Lingfei Wu, Lu Wang
2020 arXiv   pre-print
In this paper, we present ASGARD, a novel framework for Abstractive Summarization with Graph-Augmentation and semantic-driven RewarD.  ...  Sequence-to-sequence models for abstractive summarization have been studied extensively, yet the generated summaries commonly suffer from fabricated content, and are often found to be near-extractive.  ...  Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. We thank the anonymous reviewers for their suggestions.  ... 
arXiv:2005.01159v1 fatcat:gzyo26zulvd5bc5ykuweykkjaa

Ranking Sentences for Extractive Summarization with Reinforcement Learning [article]

Shashi Narayan, Shay B. Cohen, Mirella Lapata
2018 arXiv   pre-print
In this paper we conceptualize extractive summarization as a sentence ranking task and propose a novel training algorithm which globally optimizes the ROUGE evaluation metric through a reinforcement learning  ...  We use our algorithm to train a neural summarization model on the CNN and DailyMail datasets and demonstrate experimentally that it outperforms state-of-the-art extractive and abstractive systems when  ...  However, we are not aware of any attempts to use reinforcement learning for training a sentence ranker in the context of extractive summarization.  ... 
arXiv:1802.08636v2 fatcat:cja64ly2nbd37jwukckvqxuq44

Ranking Sentences for Extractive Summarization with Reinforcement Learning

Shashi Narayan, Shay B. Cohen, Mirella Lapata
2018 Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)  
In this paper we conceptualize extractive summarization as a sentence ranking task and propose a novel training algorithm which globally optimizes the ROUGE evaluation metric through a reinforcement learning  ...  We use our algorithm to train a neural summarization model on the CNN and DailyMail datasets and demonstrate experimentally that it outperforms state-of-the-art extractive and abstractive systems when  ...  Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon.  ... 
doi:10.18653/v1/n18-1158 dblp:conf/naacl/NarayanCL18 fatcat:ncopjkorczgiza65hedqesthoq
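The ROUGE-optimizing training described in the two entries above can be sketched as a REINFORCE-style update: sample an extract from the sentence ranker's distribution, score it against the reference summary with ROUGE, and scale the sample's log-likelihood by the (baseline-adjusted) reward. The `rouge1_f` below is a toy unigram-overlap stand-in for a real ROUGE implementation, and all function names are illustrative, not from the paper.

```python
import math

def rouge1_f(candidate: str, reference: str) -> float:
    """Toy unigram-overlap F1 score, a lightweight stand-in for ROUGE-1."""
    cand, ref = candidate.split(), reference.split()
    overlap = len(set(cand) & set(ref))
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(cand), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

def reinforce_loss(sample_log_probs, reward, baseline=0.0):
    """REINFORCE objective: minimize -(reward - baseline) * log p(sample).

    sample_log_probs holds the log-probability of each sentence the ranker
    sampled into the extract; minimizing this loss raises the probability of
    high-reward extracts.
    """
    return -(reward - baseline) * sum(sample_log_probs)

# One toy update step: a two-sentence extract sampled with prob 0.5 each.
reward = rouge1_f("police found the body", "the body was found by police")
loss = reinforce_loss([math.log(0.5), math.log(0.5)], reward)
```

In practice the baseline (e.g. the reward of a greedy decode) reduces the variance of the gradient estimate; with a zero baseline every sample pushes probabilities up in proportion to its raw reward.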

The Factual Inconsistency Problem in Abstractive Text Summarization: A Survey [article]

Yichong Huang, Xiachong Feng, Xiaocheng Feng, Bing Qin
2021 arXiv   pre-print
Recently, various neural encoder-decoder models pioneered by the Seq2Seq framework have been proposed to achieve the goal of generating more abstractive summaries by learning to map input text to output text  ...  This inconsistency between the original text and the summary has caused various concerns over its applicability, and the previous evaluation methods of text summarization are not suitable for this issue  ...  Compared with FASum, ASGARD (Abstractive Summarization with Graph Augmentation and semantic-driven RewarD) Huang et al. [2020] further uses a multiple-choice cloze reward to drive the model to acquire  ... 
arXiv:2104.14839v2 fatcat:37glddlmnbdnfnpk45jelcejuu

Page 537 of Psychological Abstracts Vol. 39, Issue 2 [page]

1965 Psychological Abstracts  
Cent., Syracuse) Body awareness and selective memory for body versus non-body references. Journal of Personality, 1964, 32(1), 138-144.  ...  The role of frequency, associates, and rewards in the development of affective value. Dissertation Abstracts, 1964, 24(11), 4814-4815.—Abstract. 4921. Grisell, James L. (Wayne State U.)  ... 

Question-aware Transformer Models for Consumer Health Question Summarization [article]

Shweta Yadav, Deepak Gupta, Asma Ben Abacha, Dina Demner-Fushman
2021 arXiv   pre-print
In this paper, we study the task of abstractive summarization for real-world consumer health questions.  ...  We develop an abstractive question summarization model that leverages the semantic interpretation of a question via recognition of medical entities, which enables the generation of informative summaries  ...  In another research track, some works explored reinforcement learning (RL) for abstractive summarization by relying on ROUGE scores [33, 6, 11, 32] or on multiple RL rewards in an unsupervised setting  ... 
arXiv:2106.00219v1 fatcat:qe5bbsf7obb2veucp26tk46bgm

A Survey of Natural Language Generation [article]

Chenhe Dong, Yinghui Li, Haifan Gong, Miaoxin Chen, Junxin Li, Ying Shen, Min Yang
2021 arXiv   pre-print
This paper offers a comprehensive review of the research on Natural Language Generation (NLG) over the past two decades, especially in relation to data-to-text generation and text-to-text generation deep learning  ...  This survey aims to (a) give the latest synthesis of deep learning research on the NLG core tasks, as well as the architectures adopted in the field; (b) detail meticulously and comprehensively various  ...  And during reinforcement learning, the total reward is a combination of a question answering reward (measured by F1 score) and a question fluency reward (measured by perplexity).  ... 
arXiv:2112.11739v1 fatcat:ygrpp6f25ja4vfbhcr5ycfpxhy

A Survey on Neural Abstractive Summarization Methods and Factual Consistency of Summarization [article]

Meng Cao
2022 arXiv   pre-print
Existing summarization methods can be roughly divided into two types: extractive and abstractive.  ...  An extractive summarizer explicitly selects text snippets (words, phrases, sentences, etc.) from the source document, while an abstractive summarizer generates novel text snippets to convey the most salient  ...  For the summarization task, reinforcement learning offers more flexibility as we can design different reward functions to focus on different aspects of the model (e.g., factual consistency).  ... 
arXiv:2204.09519v1 fatcat:2j7utcz7ybbjtkpl6pqzuzyluu

Exploring Human-Like Reading Strategy for Abstractive Text Summarization

Min Yang, Qiang Qu, Wenting Tu, Ying Shen, Zhou Zhao, Xiaojun Chen
2019 Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19)  
Motivated by the humanlike reading strategy that follows a hierarchical routine, we propose a novel Hybrid learning model for Abstractive Text Summarization (HATS).  ...  The recent artificial intelligence studies have witnessed great interest in abstractive text summarization.  ...  Acknowledgment This work was also partially supported by the National Natural Science Foundation of China (Grant No. 61803249), the Shanghai Sailing Program (Grant No. 18YF1407700), the SIAT Innovation Program for  ... 
doi:10.1609/aaai.v33i01.33017362 fatcat:aqrjdfgy3faxlbyte6samqll5q

An Overview of Natural Language State Representation for Reinforcement Learning [article]

Brielen Madureira, David Schlangen
2020 arXiv   pre-print
A suitable state representation is a fundamental part of the learning process in Reinforcement Learning.  ...  We appeal for more linguistically interpretable and grounded representations, careful justification of design decisions and evaluation of the effectiveness of different approaches.  ...  Acknowledgments We thank the two anonymous reviewers for their feedback and suggestions.  ... 
arXiv:2007.09774v1 fatcat:rlcplc3u5ncljacugblczaidpa

Using Semantic Similarity as Reward for Reinforcement Learning in Sentence Generation

Go Yasui, Yoshimasa Tsuruoka, Masaaki Nagata
2019 Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop  
Our experiments show that reinforcement learning with semantic similarity reward improves the BLEU scores from the baseline LSTM NMT model.  ...  We use the BERT-based scorer fine-tuned to the Semantic Textual Similarity (STS) task for semantic similarity estimation, and train the model with the estimated scores through reinforcement learning (RL  ...  Acknowledgments We would like to thank Kazuma Hashimoto and anonymous reviewers for helpful comments and suggestions.  ... 
doi:10.18653/v1/p19-2056 dblp:conf/acl/YasuiTN19 fatcat:i3wr35vlv5gmhpgqqep52av4iq
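The idea in the entry above, using a semantic similarity score as the RL reward instead of an n-gram overlap metric, can be sketched without the BERT scorer itself. Here a bag-of-words cosine similarity stands in for the fine-tuned STS model; this is a deliberate simplification for illustration, not the paper's scorer.

```python
import math
from collections import Counter

def bow_cosine(a: str, b: str) -> float:
    """Cosine similarity of bag-of-words count vectors: a cheap stand-in for
    a BERT-based STS scorer that returns a similarity in [0, 1]."""
    va, vb = Counter(a.split()), Counter(b.split())
    dot = sum(va[w] * vb[w] for w in va)
    norm_a = math.sqrt(sum(c * c for c in va.values()))
    norm_b = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def semantic_reward(generated: str, reference: str) -> float:
    """Reward for one sampled translation: semantic similarity to the reference."""
    return bow_cosine(generated, reference)
```

Plugged into the same REINFORCE update used for ROUGE-based training, this reward credits outputs that preserve meaning even when their surface wording diverges from the reference.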
Showing results 1 — 15 out of 8,349 results