551 Hits in 5.7 sec

Better AMR-To-Text Generation with Graph Structure Reconstruction

Tianming Wang, Xiaojun Wan, Shaowei Yao
2020 Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence  
In this paper, we propose a novel approach that generates texts from AMR graphs while reconstructing the input graph structures.  ...  AMR-to-text generation is a challenging task of generating texts from graph-based semantic representations.  ...  In addition, our approach is also compatible with previous works, i.e., our method can achieve better performance when combined with other advanced structures.  ... 
doi:10.24963/ijcai.2020/538 dblp:conf/ijcai/SongT020 fatcat:z6eixtrqojdtxcx4b42jenlxsi

Structural Information Preserving for Graph-to-Text Generation [article]

Linfeng Song, Ante Wang, Jinsong Su, Yue Zhang, Kun Xu, Yubin Ge, Dong Yu
2021 arXiv   pre-print
The task of graph-to-text generation aims at producing sentences that preserve the meaning of input graphs.  ...  Experiments on two benchmarks for graph-to-text generation show the effectiveness of our approach over a state-of-the-art baseline. Our code is available at .  ...  Besides, we study reconstructing complex graphs, proposing a general multi-view approach for this goal. 3 Base: Structure-Aware Transformer Formally, an input for graph-to-text generation can be represented  ... 
arXiv:2102.06749v1 fatcat:h3q7dnr3y5cj7icrqcat3m6kkm

Asking Effective and Diverse Questions: A Machine Reading Comprehension based Framework for Joint Entity-Relation Extraction

Tianyang Zhao, Zhao Yan, Yunbo Cao, Zhoujun Li
2020 Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence  
Then, we propose to predict a subset of potential relations and filter out irrelevant ones to generate questions effectively.  ...  Meanwhile, existing models enumerate all relation types to generate questions, which is inefficient and easily leads to confusing questions.  ...  To enhance graph structure learning, we propose a novel approach that generates natural language texts from AMR graphs while reconstructing the input graph structure.  ... 
doi:10.24963/ijcai.2020/542 dblp:conf/ijcai/Wang0Y20 fatcat:ac4tzxnkora7dnfzyos74lawfq

GPT-too: A language-model-first approach for AMR-to-text generation [article]

Manuel Mager, Ramon Fernandez Astudillo, Tahira Naseem, Md Arafat Sultan, Young-Suk Lee, Radu Florian, Salim Roukos
2020 arXiv   pre-print
Existing approaches to generating text from AMR have focused on training sequence-to-sequence or graph-to-sequence models on AMR annotated data only.  ...  Meaning Representations (AMRs) are broad-coverage sentence-level semantic graphs.  ...  We would also like to thank Chunchuan Lyu for his valuable feedback and help.  ... 
arXiv:2005.09123v2 fatcat:uvonqedsn5ftvg4odz5ycjhuuq

Towards a Decomposable Metric for Explainable Evaluation of Text Generation from AMR [article]

Juri Opitz, Anette Frank
2021 arXiv   pre-print
Since ℳℱ_β does not necessarily rely on gold AMRs, it may extend to other text generation tasks.  ...  metrics to measure the distance between the original and the reconstructed AMR.  ...  Acknowledgments We are grateful to three anonymous reviewers for their valuable comments that have helped to improve this paper.  ... 
arXiv:2008.08896v3 fatcat:ezuzxvvmwnavbh7bzlra2jbemq

Graph Pre-training for AMR Parsing and Generation [article]

Xuefeng Bai, Yulong Chen, Yue Zhang
2022 arXiv   pre-print
Recently, pre-trained language models (PLMs) have advanced the tasks of AMR parsing and AMR-to-text generation, respectively.  ...  Abstract meaning representation (AMR) highlights the core semantic information of text in a graph structure.  ...  We would like to thank anonymous reviewers for their insightful comments.  ... 
arXiv:2203.07836v4 fatcat:e5gq5h4jyfdgxdmz77czf7hhlq

Online Back-Parsing for AMR-to-Text Generation [article]

Xuefeng Bai, Linfeng Song, Yue Zhang
2020 arXiv   pre-print
AMR-to-text generation aims to recover a text containing the same meaning as an input AMR graph.  ...  Current research develops increasingly powerful graph encoders to better represent AMR graphs, with decoders based on standard language modeling being used to generate outputs.  ...  We would like to thank the anonymous reviewers for their insightful comments and Yulong Chen for his fruitful inspiration.  ... 
arXiv:2010.04520v1 fatcat:3gjb7rndizcgpj5vzutj4nisca

Structural Adapters in Pretrained Language Models for AMR-to-text Generation [article]

Leonardo F. R. Ribeiro, Yue Zhang, Iryna Gurevych
2021 arXiv   pre-print
Pretrained language models (PLM) have recently advanced graph-to-text generation, where the input graph is linearized into a sequence and fed into the PLM to obtain its representation.  ...  We empirically show the benefits of explicitly encoding graph structure into PLMs using StructAdapt, outperforming the state of the art on two AMR-to-text datasets, training only 5.1% of the PLM parameters  ...  We also would like to thank Jonas Pfeiffer, Jorge Cardona, Juri Opitz, Kevin Stowe, Thy Tran, Tilman Beck and Tim Baumgärtner for their feedback on this work.  ... 
arXiv:2103.09120v2 fatcat:veanmg6r5zcmrnefkiktmt7mrq

One SPRING to Rule Them Both: Symmetric AMR Semantic Parsing and Generation without a Complex Pipeline

Michele Bevilacqua, Rexhina Blloshmi, Roberto Navigli
2021 Zenodo  
In contrast, state-of-the-art AMR-to-Text generation, which can be seen as the inverse to parsing, is based on simpler seq2seq.  ...  Finally, we outperform the previous state of the art on the English AMR 2.0 dataset by a large margin: on Text-to-AMR we obtain an improvement of 3.6 Smatch points, while on AMR-to-Text we outperform the  ...  AMR-to-Text Generation AMR-to-Text generation is currently performed with two main approaches: explicitly encoding the graph structure in a graph-to-text transduction fashion (Song et al. 2018; Beck,  ... 
doi:10.5281/zenodo.5543380 fatcat:j7chtyqzn5hdledgcwpnnogdda

Adaptive Morphological Reconstruction for Seeded Image Segmentation

Tao Lei, Xiaohong Jia, Tongliang Liu, Shigang Liu, Hongying Meng, Asoke K. Nandi
2019 IEEE Transactions on Image Processing  
Second, AMR is insensitive to the scale of structuring elements because multiscale structuring elements are employed.  ...  However, the MR might mistakenly filter meaningful seeds that are required for generating accurate segmentation and it is also sensitive to the scale because a single-scale structuring element is employed  ...  In this paper, we propose an adaptive morphological reconstruction (AMR) operation that is able to generate a better seed image than MR to improve seeded segmentation algorithms.  ... 
doi:10.1109/tip.2019.2920514 fatcat:ysveiljulvgm5otkcoyfwzrlle

Adaptive Morphological Reconstruction for Seeded Image Segmentation [article]

Tao Lei, Xiaohong Jia, Tongliang Liu, Shigang Liu, Hongying Meng, and Asoke K. Nandi
2019 arXiv   pre-print
Secondly, AMR is insensitive to the scale of structuring elements because multiscale structuring elements are employed.  ...  However, MR might mistakenly filter meaningful seeds that are required for generating accurate segmentation and it is also sensitive to the scale because a single-scale structuring element is employed.  ...  In this paper, we propose an adaptive morphological reconstruction (AMR) operation that is able to generate a better seed image than MR to improve seeded segmentation algorithms.  ... 
arXiv:1904.03973v1 fatcat:7t2m62zsg5gtphh62kvp5uaf7a

Explaining Arguments with Background Knowledge

Maria Becker, Ioana Hulpuş, Juri Opitz, Debjit Paul, Jonathan Kobbe, Heiner Stuckenschmidt, Anette Frank
2020 Datenbank-Spektrum  
Our vision is a system that is able to deeply analyze argumentative text: that identifies arguments and counter-arguments, and reveals their internal structure, conveyed content and reasoning.  ...  The ExpLAIN project aims at making the structure and reasoning of arguments explicit -not only for humans, but for Robust Argumentation Machines that are endowed with language understanding capacity.  ...  To obtain better control of the quality of AMR parses, we developed a system that performs a multi-variate quality assessment of AMR graphs [34] , by predicting fine-grained AMR accuracy metrics [12]  ... 
doi:10.1007/s13222-020-00348-6 fatcat:zd55bxjr7bhs5ab5whi3ih4q4y

Abstract Meaning Representation for Multi-Document Summarization [article]

Kexin Liao, Logan Lebanoff, Fei Liu
2018 arXiv   pre-print
Our approach condenses source documents to a set of summary graphs following the AMR formalism. The summary graphs are then transformed to a set of summary sentences in a surface realization step.  ...  Generating an abstract from a collection of documents is a desirable capability for many real-world applications.  ...  Acknowledgements We are grateful to the anonymous reviewers for their insightful comments. The authors thank Chuan Wang, Jeffrey Flanigan, and Nathan Schneider for useful discussions.  ... 
arXiv:1806.05655v1 fatcat:w5iel6cyr5bdfh7nkhxbwd3wu4

Toward Abstractive Summarization Using Semantic Representations [article]

Fei Liu, Jeffrey Flanigan, Sam Thomson, Norman Sadeh, Noah A. Smith
2018 arXiv   pre-print
In this framework, the source text is parsed to a set of AMR graphs, the graphs are transformed into a summary graph, and then text is generated from the summary graph.  ...  We focus on the graph-to-graph transformation that reduces the source semantic graph into a summary graph, making use of an existing AMR parser and assuming the eventual availability of an AMR-to-text  ...  We are grateful to Nathan Schneider, Kevin Gimpel, Sasha Rush, and the ARK group for valuable discussions.  ... 
arXiv:1805.10399v1 fatcat:g245cr3sd5e6hpkqghmxxypthe

Toward Abstractive Summarization Using Semantic Representations

Fei Liu, Jeffrey Flanigan, Sam Thomson, Norman Sadeh, Noah A. Smith
2015 Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies  
In this framework, the source text is parsed to a set of AMR graphs, the graphs are transformed into a summary graph, and then text is generated from the summary graph.  ...  We focus on the graph-to-graph transformation that reduces the source semantic graph into a summary graph, making use of an existing AMR parser and assuming the eventual availability of an AMR-to-text generator  ...  We are grateful to Nathan Schneider, Kevin Gimpel, Sasha Rush, and the ARK group for valuable discussions.  ... 
doi:10.3115/v1/n15-1114 dblp:conf/naacl/0004FTSS15 fatcat:7gvqhbwubfacxj3frut22obx5u
Showing results 1–15 of 551