A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2021; you can also visit the original URL.
The file type is application/pdf.
Low-Resource Dialogue Summarization with Domain-Agnostic Multi-Source Pretraining
[article]
2021
arXiv pre-print
With the rapid increase in the volume of dialogue data from daily life, there is a growing demand for dialogue summarization. Unfortunately, training a large summarization model is generally infeasible due to the scarcity of dialogue data with annotated summaries. Most existing works on low-resource dialogue summarization directly pretrain models in other domains, e.g., the news domain, but they generally neglect the huge difference between dialogues and conventional articles. To bridge the …
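The out-of-domain pretraining the abstract refers to is commonly realized by taking a summarizer pretrained on news data and fine-tuning it on the small amount of available dialogue-summary pairs. Below is a minimal sketch of that baseline, assuming Hugging Face's transformers and datasets libraries, a BART checkpoint pretrained on CNN/DailyMail news summaries (facebook/bart-large-cnn), and the SAMSum corpus as the low-resource dialogue set; these specifics are illustrative assumptions, not the paper's exact setup.

```python
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)
from datasets import load_dataset

# News-domain summarizer as the out-of-domain starting point (assumption).
checkpoint = "facebook/bart-large-cnn"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# SAMSum as a stand-in dialogue-summary corpus (assumption); a small slice
# of the training split simulates the low-resource condition.
train_raw = load_dataset("samsum", split="train[:500]")

def preprocess(batch):
    # Tokenize dialogues as encoder inputs and summaries as decoder targets.
    model_inputs = tokenizer(batch["dialogue"], max_length=1024, truncation=True)
    labels = tokenizer(text_target=batch["summary"], max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

train_ds = train_raw.map(preprocess, batched=True, remove_columns=train_raw.column_names)

args = Seq2SeqTrainingArguments(
    output_dir="bart-dialogue-summ",
    per_device_train_batch_size=4,
    learning_rate=3e-5,
    num_train_epochs=3,
)
trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    # Pads inputs and labels together so loss is masked correctly.
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```

This sketch captures only the baseline the abstract critiques: the model's pretraining signal comes entirely from news articles, which is exactly the dialogue/article mismatch the paper's multi-source pretraining is designed to address.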
arXiv:2109.04080v2
fatcat:yhmcrg7fpncdxlbmz3zbecakfi