Deep Learning for Text Style Transfer: A Survey
2021
Computational Linguistics
We also provide discussions on a variety of important topics regarding the future development of this task. ...
We discuss the task formulation, existing datasets and subtasks, evaluation, as well as the rich methodologies in the presence of parallel and non-parallel data. ...
Language Processing, ACL/IJCNLP 2021,
Lee, Joosung. 2020. ...
doi:10.1162/coli_a_00426
fatcat:v7vmb62ckfcu5k5mpu2pydnrxy
Few-Shot Table-to-Text Generation with Prototype Memory
2021
Findings of the Association for Computational Linguistics: EMNLP 2021
unpublished
two-stage model for low resource table-to-text generation. ... In Proceedings of the 57th Conference of ...
In Findings of the Association for Computational Linguistics: ACL/IJCNLP 2021, Online Event, August 1-6, 2021, pages 560–569. ...
doi:10.18653/v1/2021.findings-emnlp.77
fatcat:2cctb4aqnzfslgw7rtn73cxr2u
Keyphrase Generation with Fine-Grained Evaluation-Guided Reinforcement Learning
2021
Findings of the Association for Computational Linguistics: EMNLP 2021
unpublished
1: Long Papers), Virtual Event, August 1-6, 2021, pages 4598–4608. ...
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume ...
doi:10.18653/v1/2021.findings-emnlp.45
fatcat:5s7tdmq5cnh5bkey45jaysgyhy
DoCoGen: Domain Counterfactual Generation for Low Resource Domain Adaptation
[article]
2022
arXiv
pre-print
Our model outperforms strong baselines and improves the accuracy of a state-of-the-art unsupervised DA algorithm. ...
Given an input text example, our DoCoGen algorithm generates a domain-counterfactual textual example (D-con) - that is similar to the original in all aspects, including the task label, but its domain is ...
Acknowledgements We would like to thank the action editor and the reviewers, as well as the members of the IE@Technion NLP group for their valuable feedback and advice. ...
arXiv:2202.12350v2
fatcat:7uomvkwjuvcctpghiy5kxbcnpe
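Note: the D-con idea in the snippet above (keep the task label, change only the domain) can be pictured as a mask-and-infill step. The sketch below is a hypothetical illustration assuming a T5-style generator, a hand-picked list of domain-indicative terms, and an invented conditioning format; it is not the authors' released DoCoGen implementation.

```python
# Hypothetical mask-and-infill sketch of a domain counterfactual: mask terms
# tied to the source domain, then infill them conditioned on a target domain.
# The masking heuristic, prompt format, and base model are assumptions.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")  # would need fine-tuning in practice

def mask_domain_terms(text, domain_terms):
    """Replace domain-indicative tokens with T5 sentinel tokens."""
    masked, idx = [], 0
    for token in text.split():
        if token.lower().strip(".,!?") in domain_terms:
            masked.append(f"<extra_id_{idx}>")
            idx += 1
        else:
            masked.append(token)
    return " ".join(masked)

def generate_counterfactual(text, source_terms, target_domain):
    """Mask source-domain terms and infill them for the target domain."""
    masked = mask_domain_terms(text, source_terms)
    prompt = f"domain: {target_domain}. {masked}"  # assumed conditioning format
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=32)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Example: move a restaurant review toward an "airline" domain while keeping
# its positive sentiment label untouched.
print(generate_counterfactual(
    "The pasta was delicious and the waiter was friendly.",
    source_terms={"pasta", "waiter"},
    target_domain="airline",
))
```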
Masked Language Modeling and the Distributional Hypothesis: Order Word Matters Pre-training for Little
[article]
2021
arXiv
pre-print
A possible explanation for the impressive performance of masked language model (MLM) pre-training is that such models have learned to represent the syntactic structures prevalent in classical NLP pipelines ...
linguistic knowledge. ...
We also thank the anonymous reviewers for their constructive feedback during the reviewing phase, which helped polish the paper to its current state. ...
arXiv:2104.06644v2
fatcat:6arq2lp37zbctln637ommaiqvu
Deep Learning for Text Style Transfer: A Survey
[article]
2021
arXiv
pre-print
We discuss the task formulation, existing datasets and subtasks, evaluation, as well as the rich methodologies in the presence of parallel and non-parallel data. ...
We also provide discussions on a variety of important topics regarding the future development of this task. Our curated paper list is at https://github.com/zhijing-jin/Text_Style_Transfer_Survey ...
Language Processing, ACL/IJCNLP 2021,
Lee, Joosung. 2020. ...
arXiv:2011.00416v5
fatcat:wfw3jfh2mjfupbzrmnztsqy4ny
Contrastive Learning of Sociopragmatic Meaning in Social Media
[article]
2022
arXiv
pre-print
Our framework outperforms other contrastive learning frameworks for both in-domain and out-of-domain data, across both the general and few-shot settings. ...
Recent progress in representation and contrastive learning in NLP has not widely considered the class of sociopragmatic meaning (i.e., meaning in interaction within different language communities). ...
Canada (SSHRC; 435-2018-0576; 895-2020-1004; 895-2021-1008), Canadian Foundation for Innovation (CFI; 37771), Compute Canada (CC), 12 and UBC ARC-Sockeye. 13 Any opinions, conclusions or recommendations ...
arXiv:2203.07648v2
fatcat:6zmhiogvirdlznoaqonyuesc54
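Note: as context for the "contrastive learning frameworks" compared in the snippet above, the block below shows a generic InfoNCE-style objective. It is a standard textbook formulation, not the specific sociopragmatic framework proposed in this paper.

```python
# Generic InfoNCE contrastive loss: each anchor should be most similar to its
# paired positive among all positives in the batch.
import torch
import torch.nn.functional as F

def info_nce_loss(anchors, positives, temperature=0.07):
    """anchors, positives: (batch, dim) embeddings of paired examples/views."""
    a = F.normalize(anchors, dim=-1)
    p = F.normalize(positives, dim=-1)
    logits = a @ p.T / temperature       # similarity of every anchor to every positive
    targets = torch.arange(a.size(0))    # the matching positive sits on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: 8 pairs of 128-d embeddings from some encoder.
loss = info_nce_loss(torch.randn(8, 128), torch.randn(8, 128))
print(loss.item())
```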
UNKs Everywhere: Adapting Multilingual Language Models to New Scripts
[article]
2021
arXiv
pre-print
Furthermore, we show that learning of the new dedicated embedding matrix in the target language can be improved by leveraging a small number of vocabulary items (i.e., the so-called lexically overlapping ...
We also demonstrate that they can yield improvements for low-resource languages written in scripts covered by the pretrained model. ...
We thank Laura Rimell, Nils Reimers, Michael Bugert and the anonymous reviewers for insightful feedback and suggestions on a draft of this paper. ...
arXiv:2012.15562v3
fatcat:4zrgue5xyfbu5ft4fun5jaspx4
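Note: the "lexically overlapping vocabulary items" idea in the snippet above can be illustrated as an embedding-initialization step: tokens shared between the new target-language vocabulary and the pretrained vocabulary inherit pretrained embeddings, the rest are initialized randomly. The sketch below uses mBERT and a hypothetical tokenizer path as assumptions; it is not the paper's exact recipe.

```python
# Simplified sketch: initialize a new dedicated embedding matrix for a target
# language, copying pretrained vectors for lexically overlapping tokens.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model = AutoModelForMaskedLM.from_pretrained("bert-base-multilingual-cased")
old_tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
new_tok = AutoTokenizer.from_pretrained("path/to/new-target-language-tokenizer")  # hypothetical

old_emb = model.get_input_embeddings().weight.data  # (|V_old|, d)
d = old_emb.size(1)
new_vocab = new_tok.get_vocab()                      # token -> id in the new vocabulary

# Random init matching the pretrained embedding statistics.
new_emb = torch.normal(
    mean=old_emb.mean().item(), std=old_emb.std().item(),
    size=(len(new_vocab), d),
)

# Copy embeddings for tokens that lexically overlap with the old vocabulary.
overlap = 0
for token, new_id in new_vocab.items():
    old_id = old_tok.get_vocab().get(token)
    if old_id is not None:
        new_emb[new_id] = old_emb[old_id]
        overlap += 1
print(f"Initialized {overlap} overlapping tokens from pretrained embeddings.")

# Attach the new embedding matrix; the transformer body can stay frozen or be
# fine-tuned on target-language data afterwards.
model.set_input_embeddings(torch.nn.Embedding.from_pretrained(new_emb, freeze=False))
```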
A Primer on Pretrained Multilingual Language Models
[article]
2021
arXiv
pre-print
Multilingual Language Models (MLLMs) such as mBERT, XLM, XLM-R, etc. have emerged as a viable option for bringing the power of pretraining to a large number of languages. ...
variety of tasks and languages for evaluating MLLMs (iii) analysing the performance of MLLMs on monolingual, zero-shot cross-lingual and bilingual tasks (iv) understanding the universal language patterns (if any) ...
ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 3118–3135. ...
CoRR, abs/2004.14327. Aaron van den Oord, Yazhe Li, and Oriol Vinyals. ...
arXiv:2107.00676v2
fatcat:jvvt6wwitvg2lc7bmttvv3aw6m
Paradigm Shift in Natural Language Processing
[article]
2021
arXiv
pre-print
In the era of deep learning, modeling for most NLP tasks has converged to several mainstream paradigms. ...
For example, we usually adopt the sequence labeling paradigm to solve a bundle of tasks such as POS-tagging, NER, Chunking, and adopt the classification paradigm to solve tasks like sentiment analysis. ...
), Virtual Event, August 1-6, 2021, pages 7183-7195. ...
arXiv:2109.12575v1
fatcat:vckeva3u3va3vjr6okhuztox4y
Few-shot Learning with Multilingual Language Models
[article]
2021
arXiv
pre-print
We present a detailed analysis of where the model succeeds and fails, showing in particular that it enables cross-lingual in-context learning on some tasks, while there is still room for improvement on ...
On the FLORES-101 machine translation benchmark, our model outperforms GPT-3 on 171 out of 182 translation directions with 32 training examples, while surpassing the official supervised baseline in 45 ...
Linguistics: ACL/IJCNLP 2021, Online Event, August 1-6, 2021, volume ACL/IJCNLP 2021 of Findings of ACL, pages 3534–3546. ...
Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali ...
arXiv:2112.10668v1
fatcat:ehexgbyr5jfetimihdd66sxdtm
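Note: the "in-context learning with 32 training examples" setting in the snippet above amounts to packing demonstrations into a prompt and letting the language model continue it. Below is a minimal, hedged sketch using a small public multilingual LM (facebook/xglm-564M) and an English-French template chosen purely for illustration; the paper's exact prompt format and evaluation checkpoints may differ.

```python
# Few-shot in-context translation: concatenate (source, target) demonstration
# pairs into a prompt and have the model continue the final line.
from transformers import pipeline

generator = pipeline("text-generation", model="facebook/xglm-564M")  # small multilingual LM

def few_shot_translation_prompt(examples, source_sentence):
    """Build an in-context prompt from (source, target) demonstration pairs."""
    lines = [f"English: {src} = French: {tgt}" for src, tgt in examples]
    lines.append(f"English: {source_sentence} = French:")
    return "\n".join(lines)

demos = [
    ("The cat sleeps.", "Le chat dort."),
    ("I like tea.", "J'aime le thé."),
    ("Where is the station?", "Où est la gare ?"),
]
prompt = few_shot_translation_prompt(demos, "The weather is nice today.")
print(generator(prompt, max_new_tokens=20)[0]["generated_text"])
```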
ElitePLM: An Empirical Study on General Language Ability Evaluation of Pretrained Language Models
[article]
2022
arXiv
pre-print
Moreover, the prediction results of PLMs in our experiments are released as an open resource for more deep and detailed analysis on the language abilities of PLMs. ...
This paper can guide the future work to select, apply, and design PLMs for specific tasks. We have made all the details of experiments publicly available at https://github.com/RUCAIBox/ElitePLM. ...
Association for Computational Linguistics. ... presented in Table 10. ... 2021, Online Event, August 1-6, 2021, volume ACL/IJCNLP 2021 of Findings of ACL, pages 1558– ... Reasoning Tests. ...
arXiv:2205.01523v1
fatcat:d2qusgoj75aefa32btqgtkybdi
Evaluating Explanations: How much do explanations from the teacher aid students?
[article]
2021
arXiv
pre-print
In this work, we introduce a framework to quantify the value of explanations via the accuracy gains that they confer on a student model trained to simulate a teacher model. ...
Crucially, the explanations are available to the student during training, but are not available at test time. ...
Findings of the Association for Computational Linguistics: ...
Kayo Yin, Patrick Fernandes, Danish Pruthi, Aditi Chaudhary, André F. T. ...
arXiv:2012.00893v2
fatcat:czmvmj4525fcdffimhaxqxtgdu
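Note: the framework in the snippet above measures an explanation's value as the gain in student-simulation accuracy when explanations are seen only during training. The toy below is a deliberately simplified, synthetic illustration of that protocol (linear students, a teacher that depends on one feature, an explanation that names that feature); it is not the paper's experimental setup.

```python
# Toy "simulation value of explanations": a student predicts the teacher's
# outputs; explanations are used during training only, and their value is the
# test-time gain in simulation accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train, X_test = rng.normal(size=(500, 10)), rng.normal(size=(500, 10))

def teacher(X):
    """The teacher decides using feature 3 only; students must simulate it."""
    return (X[:, 3] > 0).astype(int)

y_train, y_test = teacher(X_train), teacher(X_test)
few = 30  # small simulation-training budget makes the explanation matter

# Student without explanations: learns from inputs and teacher outputs alone.
plain = LogisticRegression().fit(X_train[:few], y_train[:few])

# Student with explanations at training time: the explanation identifies the
# feature the teacher relied on, so the student restricts itself to it. At
# test time no explanation is needed; the restriction was fixed in training.
explained = LogisticRegression().fit(X_train[:few, [3]], y_train[:few])

acc_plain = plain.score(X_test, y_test)
acc_explained = explained.score(X_test[:, [3]], y_test)
print(f"simulation accuracy without explanations: {acc_plain:.2f}")
print(f"simulation accuracy with explanations:    {acc_explained:.2f}")
print(f"explanation value (accuracy gain):        {acc_explained - acc_plain:.2f}")
```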
DoCoGen: Domain Counterfactual Generation for Low Resource Domain Adaptation
2022
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
unpublished
Our model outperforms strong baselines and improves the accuracy of a state-of-the-art unsupervised DA algorithm. 1 ...
Given an input text example, our DoCoGen algorithm generates a domain-counterfactual textual example (D-CON) -that is similar to the original in all aspects, including the task label, but its domain is ...
Acknowledgements We would like to thank the action editor and the reviewers, as well as the members of the IE@Technion NLP group for their valuable feedback and advice. ...
doi:10.18653/v1/2022.acl-long.533
fatcat:lkukck3wn5d4hdcbn6xidk7smm
Theoretical and Practical Issues in the Semantic Annotation of Four Indigenous Languages
2021
Proceedings of The Joint 15th Linguistic Annotation Workshop (LAW) and 3rd Designing Meaning Representations (DMR) Workshop
unpublished
In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages ...
Deng Cai and Wai Lam. 2020. ...
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1290–1301, Online. ...
doi:10.18653/v1/2021.law-1.2
fatcat:k5zsmrpu7rbdldjc2qleaxuhmi
Showing results 1 — 15 out of 19 results