Improving Faithfulness in Abstractive Summarization with Contrast Candidate Generation and Selection
[article]
2021
arXiv
pre-print
To address the issue, we study contrast candidate generation and selection as a model-agnostic post-processing technique to correct the extrinsic hallucinations (i.e. information not present in the source ...
We learn a discriminative correction model by generating alternative candidate summaries where named entities and quantities in the generated summary are replaced with ones with compatible semantic types ...
This work was supported in part by a Focused Award from Google, a gift from Tencent, and by Contract FA8750-19-2-1004 with the US Defense Advanced Research Projects Agency (DARPA). ...
arXiv:2104.09061v1
fatcat:frfn2ppc2rgbrelhjtsk4l6pmm
Improving Faithfulness in Abstractive Summarization with Contrast Candidate Generation and Selection
2021
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
unpublished
To address the issue, we study contrast candidate generation and selection as a model-agnostic post-processing technique to correct the extrinsic hallucinations (i.e. information not present in the source ...
We learn a discriminative correction model by generating alternative candidate summaries where named entities and quantities in the generated summary are replaced with ones with compatible semantic types ...
This work was supported in part by a Focused Award from Google, a gift from Tencent, and by Contract FA8750-19-2-1004 with the US Defense Advanced Research Projects Agency (DARPA). ...
doi:10.18653/v1/2021.naacl-main.475
fatcat:u6ur4mn5yjbwrjgyyfcoenvpa4
Faithfulness in Natural Language Generation: A Systematic Survey of Analysis, Evaluation and Optimization Methods
[article]
2022
arXiv
pre-print
abstractive summarization, dialogue generation, machine translation, and data-to-text generation. ...
Many studies on analysis, evaluation, and optimization methods for faithfulness problems have been proposed for various tasks, but have not been organized, compared and discussed in a combined manner. ...
Faithfulness in Abstractive Summarization The faithfulness problem has attracted increasing attention in abstractive summarization. ...
arXiv:2203.05227v1
fatcat:q2u3ojyi6vb7pjt6ajwbinjmpa
Nutri-bullets: Summarizing Health Studies by Composing Segments
[article]
2021
arXiv
pre-print
For instance, on the BreastCancer dataset our approach gets a more than 50% improvement on relevance and faithfulness.[Our code and data is available at ] ...
Compared to state-of-the-art methods, our approach leads to more faithful, relevant and diverse summarization – properties imperative to this application. ...
(i) We begin with multiple scientific abstracts to summarize from; (ii) We extract knowledge spans as possible candidates for generation; (iii) We select key spans ("improve heart health by lowering cholesterol ...
arXiv:2103.11921v1
fatcat:a7eiagur35eo5lgmrhs53rc7sq
On Faithfulness and Factuality in Abstractive Summarization
[article]
2020
arXiv
pre-print
However, our analysis does show that pretrained models are better summarizers not only in terms of raw metrics, i.e., ROUGE, but also in generating faithful and factual summaries as evaluated by humans ...
In this paper we have analyzed limitations of these models for abstractive document summarization and found that these models are highly prone to hallucinate content that is unfaithful to the input document ...
The hard work of Muqthar Mohammad, Mohd Majeed and Ashwin Kakarla made our human annotation possible. ...
arXiv:2005.00661v1
fatcat:53fyqobpknglhcbo2xgluaacue
CO2Sum: Contrastive Learning for Factual-Consistent Abstractive Summarization
[article]
2022
arXiv
pre-print
What's more, these two schemes are orthogonal and can be combined to further improve faithfulness. ...
Generating factual-consistent summaries is a challenging task for abstractive summarization. Previous works mainly encode factual information or perform post-correct/rank after decoding. ...
Chen et al. (2021) study contrast candidate generation and selection to correct the extrinsic fact hallucinations in a post-edit manner. ...
arXiv:2112.01147v2
fatcat:g3nnllzt6rabvmmygkmtotiyau
Summary Explorer: Visualizing the State of the Art in Text Summarization
[article]
2021
arXiv
pre-print
The tool complements existing approaches for locally debugging summarization models and improves upon them. The tool is available at https://tldr.webis.de/ ...
The underlying design of the tool considers three well-known summary quality criteria (coverage, faithfulness, and position bias), encapsulated in a guided assessment based on tailored visualizations. ...
This work was supported by the German Federal Ministry of Education and Research (BMBF, 01/S18026A-F) by funding the competence center for Big Data and AI (ScaDS.AI Dresden/Leipzig). ...
arXiv:2108.01879v2
fatcat:sokbo6ykozb3bce3qszvxgz55e
Improving Factual Consistency of Abstractive Summarization on Customer Feedback
[article]
2021
arXiv
pre-print
In this work, we introduce a set of methods to enhance the factual consistency of abstractive summarization on customer feedback. ...
Furthermore, our approaches do not depend on the structure of the summarization model and thus are generalizable to any abstractive summarization systems. ...
Thus, they can be applied to any abstraction-based summarization model to improve the model faithfulness. Second, we test the proposed approaches on SOTA summarization algorithms such as BART and T5. ...
arXiv:2106.16188v1
fatcat:4s7xtd7t7jgw5kzv6h2vxaqomq
RetrievalSum: A Retrieval Enhanced Framework for Abstractive Summarization
[article]
2021
arXiv
pre-print
In this paper, we propose RetrievalSum, a novel retrieval enhanced abstractive summarization framework consisting of a dense Retriever and a Summarizer. ...
Results show that our framework obtains a significant improvement of 1.38~4.66 in ROUGE-1 score when compared with the powerful pre-trained models, and achieves a new state of the art on BillSum. ...
We sort all the candidate exemplars in C by the ROUGE score with reference summary Y , and select top 8 candidates with the highest score as positive samples in the experiments. ...
arXiv:2109.07943v2
fatcat:sjd24he2mvfcxgerylm2cqfwka
Survey of Hallucination in Natural Language Generation
[article]
2022
arXiv
pre-print
This advancement has led to more fluent and coherent NLG, leading to improved development in downstream tasks such as abstractive summarization, dialogue generation and data-to-text generation. ...
downstream tasks, namely abstractive summarization, dialogue generation, generative question answering, data-to-text generation, and machine translation. ...
The contrast candidate generation model replaces the named entities in the generated summaries with ones present in the source documents, and the contrast candidate selection model will select the best ...
arXiv:2202.03629v4
fatcat:s6c26a7orncrffis55q5swo5ue
A New Approach to Overgenerating and Scoring Abstractive Summaries
[article]
2021
arXiv
pre-print
In this paper, we propose a two-staged strategy to generate a diverse set of candidate summaries from the source text in stage one, then score and select admissible ones in stage two. ...
We propose a new approach to generate multiple variants of the target summary with diverse content and varying lengths, then score and select admissible ones according to users' needs. ...
This research was supported in part by the National Science Foundation grant IIS-1909603. ...
arXiv:2104.01726v1
fatcat:544j46hzibfz7mhhpx35vwsywu
Attractive or Faithful? Popularity-Reinforced Learning for Inspired Headline Generation
[article]
2020
arXiv
pre-print
PORL-HG exploits the extractive-abstractive architecture with 1) Popular Topic Attention (PTA) for guiding the extractor to select the attractive sentence from the article and 2) a popularity predictor ...
human (71.03%) and the predictor (at least 27.60%), while the faithfulness of PORL-HG is also comparable to the state-of-the-art generation model. ...
2221-E-001-012-MY3, MOST-108-2218-E-009-050, and by the Higher Education Sprout Project by the Ministry of Education (MOE) in Taiwan through grant 108W267. ...
arXiv:2002.02095v1
fatcat:ggekfph43rh6jdyn5nx4qotqqa
Attractive or Faithful? Popularity-Reinforced Learning for Inspired Headline Generation
2020
Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence
PORL-HG exploits the extractive-abstractive architecture with 1) Popular Topic Attention (PTA) for guiding the extractor to select the attractive sentence from the article and 2) a popularity predictor ...
human (71.03%) and the predictor (at least 27.60%), while the faithfulness of PORL-HG is also comparable to the state-of-the-art generation model. ...
Acknowledgement This work was supported in part by the Ministry of Science and Technology of Taiwan under Grants MOST- ...
doi:10.1609/aaai.v34i05.6421
fatcat:mex3cxcdvbdsxh54aegosumxie
Retrieve, Rerank and Rewrite: Soft Template Based Neural Summarization
2018
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
In addition, the import of high-quality external summaries improves the stability and readability of generated summaries. ...
Most previous seq2seq summarization systems purely depend on the source text to generate summaries, which tends to work unstably. ...
Acknowledgments The work described in this paper was supported by Research Grants Council of Hong Kong (PolyU 152036/17E), National Natural Science Foundation of China (61672445 and 61572049) and The Hong ...
doi:10.18653/v1/p18-1015
dblp:conf/acl/LiWLC18
fatcat:ak6jzmyonzgipjl7ksjrj7dxua
A condense-then-select strategy for text summarization
2021
Knowledge-Based Systems
Finally, an extractor utilizes the context information of the document to select candidates and assembles them into a summary. ...
Select-then-compress is a popular hybrid framework for text summarization due to its high efficiency. ...
We would also like to thank the editors and the three anonymous reviewers for their comments. ...
doi:10.1016/j.knosys.2021.107235
fatcat:md4xip7l5bam3hamyjnmnk2liq
Showing results 1 — 15 out of 15,725 results