69 Hits in 10.8 sec

What to Pre-Train on? Efficient Intermediate Task Selection [article]

Clifton Poth, Jonas Pfeiffer, Andreas Rücklé, Iryna Gurevych
2021 arXiv   pre-print
Our best methods achieve an average Regret@3 of less than 1% across all target tasks, demonstrating that we are able to efficiently identify the best datasets for intermediate training.  ...  With an abundance of candidate datasets as well as pre-trained language models, it has become infeasible to run the cross-product of all combinations to find the best transfer setting.  ...  We thank Leonardo Ribeiro and the anonymous reviewers for insightful feedback and suggestions on a draft of this paper.  ... 
arXiv:2104.08247v2 fatcat:4ljcfshev5f3tmgugrrrkh3s4m

Deep Learning for Text Style Transfer: A Survey [article]

Di Jin, Zhijing Jin, Zhiting Hu, Olga Vechtomova, Rada Mihalcea
2021 arXiv   pre-print
We also provide discussions on a variety of important topics regarding the future development of this task. Our curated paper list is at https://github.com/zhijing-jin/Text_Style_Transfer_Survey  ...  In this paper, we present a systematic survey of the research on neural text style transfer, spanning over 100 representative articles since the first neural text style transfer work in 2017.  ...  Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20 ... In Proceedings of the 2019 Conference of the North American Chapter of the Association for ...  ... 
arXiv:2011.00416v5 fatcat:wfw3jfh2mjfupbzrmnztsqy4ny

Graph Neural Networks for Natural Language Processing: A Survey [article]

Lingfei Wu, Yu Chen, Kai Shen, Xiaojie Guo, Hanning Gao, Shucheng Li, Jian Pei, Bo Long
2021 arXiv   pre-print
As a result, there is a surge of interest in developing new deep learning techniques on graphs for a large number of NLP tasks.  ...  Finally, we discuss various outstanding challenges for making full use of GNNs for NLP as well as future research directions.  ...  Liu, editors, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 8823–8838.  ... 
arXiv:2106.06090v1 fatcat:zvkhinpcvzbmje4kjpwjs355qu

Deep Learning for Text Style Transfer: A Survey

Di Jin, Zhijing Jin, Zhiting Hu, Olga Vechtomova, Rada Mihalcea
2021 Computational Linguistics  
We also provide discussions on a variety of important topics regarding the future development of this task.  ...  In this paper, we present a systematic survey of the research on neural text style transfer, spanning over 100 representative articles since the first neural text style transfer work in 2017.  ...  EMNLP 2020, Online Event, 16-20 November 2020, pages 1307–1323, Association for ... Controlling linguistic style aspects in neural language generation.  ... 
doi:10.1162/coli_a_00426 fatcat:v7vmb62ckfcu5k5mpu2pydnrxy

Reinforcement Learning-based Dialogue Guided Event Extraction to Exploit Argument Relations [article]

Qian Li, Hao Peng, Jianxin Li, Jia Wu, Yuanxing Ning, Lihong Wang, Philip S. Yu, Zheng Wang
2021 arXiv   pre-print
Event extraction is a fundamental task for natural language processing. Finding the roles of event arguments like event participants is essential for event extraction.  ...  Experimental results show that our approach consistently outperforms seven state-of-the-art event extraction methods on event classification, argument role classification, and argument identification.  ...  of the Fourth Workshop on Structured Prediction for NLP@EMNLP 2020, Online, November 20, 2020, pp. 74–83, 2020.  ... 
arXiv:2106.12384v2 fatcat:blyylym77vdupbrolil2dtmrna

HateCheck: Functional Tests for Hate Speech Detection Models [article]

Paul Röttger, Bertram Vidgen, Dong Nguyen, Zeerak Waseem, Helen Margetts, Janet B. Pierrehumbert
2021 arXiv   pre-print
Detecting online hate is a difficult task that even state-of-the-art models struggle with.  ...  We craft test cases for each functionality and validate their quality through a structured annotation process.  ...  In Proceedings of the Fourth Workshop on Online Abuse and Harms, pages 173-183, Online. Association for Computational Linguistics.  ... 
arXiv:2012.15606v2 fatcat:uq4e5gl6djga7iuszekimn5s64

AdapterFusion: Non-Destructive Task Composition for Transfer Learning [article]

Jonas Pfeiffer, Aishwarya Kamath, Andreas Rücklé, Kyunghyun Cho, Iryna Gurevych
2021 arXiv   pre-print
We empirically evaluate AdapterFusion on 16 diverse NLU tasks, and find that it effectively combines various types of knowledge at different layers of the model.  ...  We show that by separating the two stages, i.e., knowledge extraction and knowledge composition, the classifier can effectively exploit the representations learned from multiple tasks in a non-destructive  ...  We thank Sebastian Ruder, Max Glockner, Jason Phang, Alex Wang, Katrina Evtimova and Sam Bowman for insightful feedback and suggestions on drafts of this paper.  ... 
arXiv:2005.00247v3 fatcat:rhjexrlidzcjtck5xjqcmaxmxe

NL-Augmenter: A Framework for Task-Sensitive Natural Language Augmentation [article]

Kaustubh D. Dhole, Varun Gangal, Sebastian Gehrmann, Aadesh Gupta, Zhenhao Li, Saad Mahamood, Abinaya Mahendiran, Simon Mille, Ashish Srivastava, Samson Tan, Tongshuang Wu, Jascha Sohl-Dickstein (+113 others)
2021 arXiv   pre-print
Data augmentation is an important component in the robustness evaluation of models in natural language processing (NLP) and in enhancing the diversity of the data they are trained on.  ...  We describe the framework and an initial set of 117 transformations and 23 filters for a variety of natural language tasks.  ...  ...tions, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net. ... In Proceedings of the Fourth Workshop on Discourse in Machine ...  ... 
arXiv:2112.02721v1 fatcat:uqizuxc4wzgxnnfsc6azh6ckpq

Adapting Language Models for Zero-shot Learning by Meta-tuning on Dataset and Prompt Collections [article]

Ruiqi Zhong, Kristy Lee, Zheng Zhang, Dan Klein
2021 arXiv   pre-print
For example, to classify sentiment without any training examples, we can "prompt" the LM with the review and the label description "Does the user like this movie?"  ...  Therefore, measuring zero-shot learning performance on language models out-of-the-box might underestimate their true potential, and community-wide efforts on aggregating datasets and unifying their formats  ...  We thank Steven Cao, David Gaddy, Haizhi Lai, Jacob Steinhardt, Kevin Yang and anonymous reviewers for their comments on the paper.  ... 
arXiv:2104.04670v5 fatcat:nicxnnusjjg3jdyqch6nzy7y5m

Towards Explainable Fact Checking [article]

Isabelle Augenstein
2021 arXiv   pre-print
, and, very recently, by legislation requiring online platforms operating in the EU to provide transparent reporting on their services.  ...  The past decade has seen a substantial rise in the amount of mis- and disinformation online, from targeted disinformation campaigns to influence politics, to the unintentional spreading of misinformation  ...  Proceedings of The Third Workshop on Representation Learning for NLP.  ... 
arXiv:2108.10274v2 fatcat:5s4an6irezcjfmvvhmiaeqarh4

DESYR: Definition and Syntactic Representation Based Claim Detection on the Web [article]

Megha Sundriyal, Parantak Singh, Md Shad Akhtar, Shubhashis Sengupta, Tanmoy Chakraborty
2021 arXiv   pre-print
Furthermore, the increase in the usage of online social media has resulted in an explosion of unsolicited information on the web presented as informal text.  ...  We see an increase of 3 claim-F1 points on the LESA-Twitter dataset, an increase of 1 claim-F1 point and 9 macro-F1 points on the Online Comments(OC) dataset, an increase of 24 claim-F1 points and 17 macro-F1  ...  In Proceedings of the Workshop on Language in Social Media (LSM 2011). [28] Andreas Peldszus and Manfred Stede. 2015.  ... 
arXiv:2108.08759v1 fatcat:bd5shqh7pbgsvncwuo7ohcawze

Detecting Fine-Grained Cross-Lingual Semantic Divergences without Supervision by Learning to Rank [article]

Eleftheria Briakou, Marine Carpuat
2020 arXiv   pre-print
This work improves the prediction and annotation of fine-grained semantic divergences.  ...  We evaluate our models on the Rationalized English-French Semantic Divergences, a new dataset released with this work, consisting of English-French sentence-pairs annotated with semantic divergence classes  ...  Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.  ... 
arXiv:2010.03662v1 fatcat:cqpq5cc7xfd7bc2ukkrfx5lwli

Unsupervised Distillation of Syntactic Information from Contextualized Word Representations [article]

Shauli Ravfogel, Yanai Elazar, Jacob Goldberger, Yoav Goldberg
2021 arXiv   pre-print
In this work, we tackle the task of unsupervised disentanglement between semantics and structure in neural language representations: we aim to learn a transformation of the contextualized vectors, that  ...  To this end, we automatically generate groups of sentences which are structurally similar but semantically different, and use metric-learning approach to learn a transformation that emphasizes the structural  ...  Acknowledgments We would like to thank Gal Chechik for providing valuable feedback on early version of this work.  ... 
arXiv:2010.05265v2 fatcat:ejfjlke7czeuphg73lx4vjiasa

WHOSe Heritage: Classification of UNESCO World Heritage "Outstanding Universal Value" Documents with Soft Labels [article]

Nan Bai, Renqian Luo, Pirouz Nourian, Ana Pereira Roders
2021 arXiv   pre-print
A human study with expert evaluation on the model prediction shows that the models are sufficiently generalizable.  ...  This study applies state-of-the-art NLP models to build a classifier on a new dataset containing Statements of OUV, seeking an explainable and scalable automation tool to facilitate the nomination, evaluation  ...  Acknowledgements The presented study is within the framework of the Heriland-Consortium.  ... 
arXiv:2104.05547v2 fatcat:yi7chwjrenbjnczyysjr7pukvy

A Survey of Deep Active Learning [article]

Pengzhen Ren, Yun Xiao, Xiaojun Chang, Po-Yao Huang, Zhihui Li, Brij B. Gupta, Xiaojiang Chen, Xin Wang
2021 arXiv   pre-print
Deep learning (DL) is greedy for data and requires a large supply of it to optimize massive numbers of parameters, so that the model learns how to extract high-quality features.  ...  Finally, we discuss open questions and problems in DAL, and give some possible development directions for DAL.  ...  In Proceedings of the International Conference on Computer Vision, Kerkyra, Corfu, Greece, September 20-25, 1999. 1150–1157. [140] Xiaoming Lv, Fajie Duan, Jiajia Jiang, Xiao Fu, and Lin Gan. 2020  ... 
arXiv:2009.00236v2 fatcat:zuk2doushzhlfaufcyhoktxj7e