A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2021; you can also visit the original URL.
The file type is application/pdf.
Positioning yourself in the maze of Neural Text Generation: A Task-Agnostic Survey
[article]
2021
arXiv
pre-print
Thereby, we deliver a one-stop destination for researchers in the field to facilitate a perspective on where to situate their work and how it impacts other closely related generation tasks. ...
In order to progress research in text generation, it is critical to absorb the existing research works and position ourselves in this massively growing field. ...
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, ...
arXiv:2010.07279v2
fatcat:jp76n5vk7zbvnhfexhwa3rludu
On the Robustness of Language Encoders against Grammatical Errors
[article]
2020
arXiv
pre-print
Our results shed light on understanding the robustness and behaviors of language encoders against grammatical errors. ...
To interpret model behaviors, we further design a linguistic acceptability task to reveal their abilities in identifying ungrammatical sentences and the position of errors. ...
Acknowledgements We would like to thank the anonymous reviewers for their feedback. This work is supported by NSF Grant #IIS-1927554. ...
arXiv:2005.05683v1
fatcat:6lo24qidevhtjduvef466wq4pe
Contrastive Learning for Many-to-many Multilingual Neural Machine Translation
[article]
2021
arXiv
pre-print
For non-English directions, mRASP2 achieves an average improvement of 10+ BLEU compared with the multilingual Transformer baseline. ...
In this work, we aim to build a many-to-many translation system with an emphasis on the quality of non-English language directions. ...
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, ...
arXiv:2105.09501v3
fatcat:2yui6p4t3bhgzpbmxshb6xctfi
What Have We Achieved on Text Summarization?
[article]
2020
arXiv
pre-print
pre-training techniques, and in particular sequence-to-sequence pre-training, are highly effective for improving text summarization, with BART giving the best results. ...
Aiming to gain more understanding of summarization systems with respect to their strengths and limits on a fine-grained syntactic and semantic level, we consult the Multidimensional Quality Metric (MQM) ...
Acknowledgements: We thank all anonymous reviewers for their constructive comments. This work is supported by NSFC 61976180 and a research grant from Tencent Inc. ...
arXiv:2010.04529v1
fatcat:tsvkgiqi7vbxpivw2qgrta2y54
Logic2Text: High-Fidelity Natural Language Generation from Logical Forms
[article]
2020
arXiv
pre-print
We hope our dataset can encourage research towards building an advanced NLG system capable of natural, faithful, and human-like generation. ...
Previous works on Natural Language Generation (NLG) from structured data have primarily focused on surface-level descriptions of record sequences. ...
The authors are solely responsible for the contents of the paper and the opinions expressed in this publication do not reflect those of the funding agencies. ...
arXiv:2004.14579v2
fatcat:q7hv27p3pjfmxkvtwz7en74hpa
Reinforcement Learning-based Dialogue Guided Event Extraction to Exploit Argument Relations
[article]
2021
arXiv
pre-print
Event extraction is a fundamental task for natural language processing. Finding the roles of event arguments like event participants is essential for event extraction. ...
Experimental results show that our approach consistently outperforms seven state-of-the-art event extraction methods on event classification, argument role classification, and argument identification. ...
Zhu, “Event extraction as multi-turn question answering,” in Proceedings of the 2020 ...
of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, ...
arXiv:2106.12384v2
fatcat:blyylym77vdupbrolil2dtmrna
A Survey of Deep Active Learning
[article]
2021
arXiv
pre-print
In recent years, the rapid development of internet technology has brought us into an era of information torrents, with massive amounts of data. ...
Although the related research has been quite abundant, it lacks a comprehensive survey of DAL. ...
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June ...
arXiv:2009.00236v2
fatcat:zuk2doushzhlfaufcyhoktxj7e
Contrastive Learning for Many-to-many Multilingual Neural Machine Translation
2021
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
unpublished
For non-English directions, mRASP2 achieves an average improvement of 10+ BLEU compared with the multilingual Transformer baseline. ...
In this work, we aim to build a many-to-many translation system with an emphasis on the quality of non-English language directions. ...
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, ...
doi:10.18653/v1/2021.acl-long.21
fatcat:qz3pskwwkfdaxjsyygdxf7qela
Virtual Data Augmentation: A Robust and General Framework for Fine-tuning Pre-trained Models
2021
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
unpublished
of the North American Chapter of the Association for Computational Linguistics: Human Language ...
Robin Jia and Percy Liang. 2017. ...
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technolo ...
Kun Zhou, Kai Zhang, Yu Wu, Shujie Liu, and Jing- ...
doi:10.18653/v1/2021.emnlp-main.315
fatcat:hnbvwxrf5rbqdfp6aan5zjfi4q
UnClE: Explicitly Leveraging Semantic Similarity to Reduce the Parameters of Word Embeddings
2021
Findings of the Association for Computational Linguistics: EMNLP 2021
unpublished
for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, ...
In Proceedings of the 2019 Conference of the North American Chapter of the Association ...
Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial ...
doi:10.18653/v1/2021.findings-emnlp.156
fatcat:kx7dmxxtbnfmla3ld3z6nsoilq
Simple, Interpretable and Stable Method for Detecting Words with Usage Change across Corpora
2020
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
unpublished
The problem of comparing two bodies of text and searching for words that differ in their usage between them arises often in digital humanities and computational social science. ...
We demonstrate its effectiveness in 9 different setups, considering different corpus splitting criteria (age, gender and profession of tweet authors, time of tweet) and different languages (English, French ...
the Israeli ministry of Science, Technology and Space through the Israeli-French Maimonide Cooperation programme. ...
doi:10.18653/v1/2020.acl-main.51
fatcat:v6nklop67vcwznw3i5sriadbsu
What Have We Achieved on Text Summarization?
2020
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
unpublished
MQM is a framework for declaring and describing human writing quality which stipulates a hierarchical listing of error types restricted to human writing and translation. ...
pre-training techniques, and in particular sequence-to-sequence pre-training, are highly effective for improving text summarization, with BART giving the best results. ...
Acknowledgements: We thank all anonymous reviewers for their constructive comments. This work is supported by NSFC 61976180 and a research grant from Tencent Inc. ...
doi:10.18653/v1/2020.emnlp-main.33
fatcat:xh3i4l676baq5kljtkcjmjjxna
Perturbation CheckLists for Evaluating NLG Evaluation Metrics
2021
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
unpublished
and Wei-Jing Zhu. 2002. ...
NAACL-HLT 2019, Minneapolis, MN, USA, June 2- ...
Association for Computational Linguistics. ...
In Proceedings of the 2019 Conference of the North ...
doi:10.18653/v1/2021.emnlp-main.575
fatcat:nvicwmagqrfx7malsmalooj67i
Natural Language Processing and Information Extraction
2021
This thesis presents original research in the subject of Machine Learning and more specifically in the fields of Natural Language Processing and Information Extraction. ...
Capitalizing on the importance of semantic entities, we present two methodologies to incorporate coreferent information in Language Modeling. ...
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2523–2544. ...
doi:10.26262/heal.auth.ir.334427
fatcat:xnmddj3t7jg7poadfwrsxqsi6a