Probing What Different NLP Tasks Teach Machines about Function Word Comprehension [article]

Najoung Kim, Roma Patel, Adam Poliak, Alex Wang, Patrick Xia, R. Thomas McCoy, Ian Tenney, Alexis Ross, Tal Linzen, Benjamin Van Durme, Samuel R. Bowman, Ellie Pavlick
2019 arXiv   pre-print
Overall, no pretraining objective dominates across the board, and our function word probing tasks highlight several intuitive differences between pretraining objectives, e.g., that NLI helps the comprehension  ...  These tasks are created by structurally mutating sentences from existing datasets to target the comprehension of specific types of function words (e.g., prepositions, wh-words).  ...  Conclusion We propose a new challenge set of nine tasks that focus on probing function word comprehension.  ... 
arXiv:1904.11544v2 fatcat:bapnyak2bza3hdlqwa3kdx4aqe

What Makes it Difficult to Understand a Scientific Literature? [article]

Mengyun Cao, Jiao Tian, Dezhi Cheng, Jin Liu, Xiaoping Sun
2015 arXiv   pre-print
To this end, we conducted a reading comprehension test on two scientific papers written in different styles.  ...  In order to achieve this ideal, computer science researchers have put forward many models and algorithms attempting to enable machines to analyze and process human natural language on different  ...  Some NLP tasks address syntax/grammar analysis, such as word segmentation, co-reference resolution, named entity recognition, Part-of-Speech (PoS) tagging, etc.  ...
arXiv:1512.01409v1 fatcat:q4azfn5nxbhnhcd3myocebly6e

A Primer in BERTology: What we know about how BERT works [article]

Anna Rogers, Olga Kovaleva, Anna Rumshisky
2020 arXiv   pre-print
Transformer-based models have pushed state of the art in many areas of NLP, but our understanding of what is behind their success is still limited.  ...  We review the current state of knowledge about how BERT works, what kind of information it learns and how it is represented, common modifications to its training objectives and architecture, the overparameterization  ...  Most BERT analysis papers focus on different probes of the model, with the goal to find what the language model "knows".  ... 
arXiv:2002.12327v3 fatcat:jfv6tmzzhrazrjhynwjbyiewkm

Probing the Probing Paradigm: Does Probing Accuracy Entail Task Relevance? [article]

Abhilasha Ravichander, Yonatan Belinkov, Eduard Hovy
2021 arXiv   pre-print
Although neural models have achieved impressive results on several NLP benchmarks, little is understood about the mechanisms they use to perform language tasks.  ...  However, to what extent was the information encoded in sentence representations, as discovered through a probe, actually used by the model to perform its task?  ...  SentEval probing tasks.  ... 
arXiv:2005.00719v3 fatcat:ow5iphznjbflhdb5islyddh77a

A Primer in BERTology: What We Know About How BERT Works

Anna Rogers, Olga Kovaleva, Anna Rumshisky
2020 Transactions of the Association for Computational Linguistics  
Transformer-based models have pushed state of the art in many areas of NLP, but our understanding of what is behind their success is still limited.  ...  We review the current state of knowledge about how BERT works, what kind of information it learns and how it is represented, common modifications to its training objectives and architecture, the overparameterization  ...  Most BERT analysis papers focus on different probes of the model, with the goal to find what the language model "knows".  ...
doi:10.1162/tacl_a_00349 fatcat:5ks5emvhavfmteskmbaznz6i4y

A Prompting-based Approach for Adversarial Example Generation and Robustness Enhancement [article]

Yuting Yang, Pei Huang, Juan Cao, Jintao Li, Yun Lin, Jin Song Dong, Feifei Ma, Jian Zhang
2022 arXiv   pre-print
Our work indicates that the prompting paradigm has great potential in probing some fundamental flaws of PLMs and fine-tuning them for downstream tasks.  ...  Our attack technique targets the inherent vulnerabilities of NLP models, allowing us to generate samples even without interacting with the victim NLP model, as long as it is based on pre-trained language  ...  Such vulnerability has been exposed in many NLP tasks including text classification [Jin et al., 2020], machine translation [Zhang et al., 2021], dependency parsing [Zheng et al., 2020], reading  ...
arXiv:2203.10714v1 fatcat:kmp53fxw7rb3dazsk2gv7v7wza

Empirical Evaluation and Theoretical Analysis for Representation Learning: A Survey [article]

Kento Nozawa, Issei Sato
2022 arXiv   pre-print
Representation learning enables us to automatically extract generic feature representations from a dataset to solve another machine learning task.  ...  Recently, feature representations extracted by a representation learning algorithm, combined with a simple predictor, have exhibited state-of-the-art performance on several machine learning tasks.  ...  We refer to comprehensive review papers [33, 34, 35] for more details about knowledge graph representation learning.  ...
arXiv:2204.08226v1 fatcat:bw6orocgjjb2niyttv6g6hivba

Machine Reading Comprehension: The Role of Contextualized Language Models and Beyond [article]

Zhuosheng Zhang, Hai Zhao, Rui Wang
2020 arXiv   pre-print
Machine reading comprehension (MRC) aims to teach machines to read and comprehend human languages, which is a long-standing goal of natural language processing (NLP).  ...  architecture from the insights of the cognitive process of humans; 5) previous highlights, emerging topics, and our empirical analysis, among which we especially focus on what works in different periods  ...  There have recently been heated discussions about what CLM models learn.  ...
arXiv:2005.06249v1 fatcat:htdq7hk6mrghvknwbkchgdioku

QA Dataset Explosion: A Taxonomy of NLP Resources for Question Answering and Reading Comprehension [article]

Anna Rogers, Matt Gardner, Isabelle Augenstein
2021 arXiv   pre-print
Question answering and reading comprehension have been particularly prolific in this regard, with over 80 new datasets appearing in the past two years.  ...  Alongside huge volumes of research on deep learning models in NLP in recent years, there has also been much work on the benchmark datasets needed to track modeling progress.  ...  "Visual QA" means "answering questions about images." Similarly, there is a task for QA about audio clips.  ...
arXiv:2107.12708v1 fatcat:sfwmrimlgfg4xkmmca6wspec7i

DirectProbe: Studying Representations without Classifiers [article]

Yichu Zhou, Vivek Srikumar
2021 arXiv   pre-print
In this work, we argue that doing so can be unreliable because different representations may need different classifiers.  ...  Understanding how linguistic structures are encoded in contextualized embeddings could help explain their impressive performance across NLP tasks.  ...  Acknowledgments We thank the members of the Utah NLP group and Nathan Schneider for discussions and valuable insights, and reviewers for their helpful feedback.  ...
arXiv:2104.05904v1 fatcat:ntioz6heozasxejrmj27kgbimi

Equity Beyond Bias in Language Technologies for Education

Elijah Mayfield, Michael Madaio, Shrimai Prabhumoye, David Gerritsen, Brittany McLaughlin, Ezekiel Dixon-Román, Alan W Black
2019 Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications  
Here, we introduce concepts from culturally relevant pedagogy and other frameworks for teaching and learning, and identify future work on equity in NLP.  ...  As machine learning researchers begin to study fairness and bias in earnest, language technologies in education have an unusually strong theoretical and applied foundation to build on.  ...
doi:10.18653/v1/w19-4446 dblp:conf/bea/MayfieldMPGMDB19 fatcat:gp3vr6cwqfh63pvifmv2k3r3iy

Word meaning in minds and machines [article]

Brenden M. Lake, Gregory L. Murphy
2021 arXiv   pre-print
In this article, we compare how humans and machines represent the meaning of words.  ...  Machines have achieved a broad and growing set of linguistic competencies, thanks to recent progress in Natural Language Processing (NLP).  ...  Lake's contribution was partially funded by NSF Award 1922658 NRT-HDR: FUTURE Foundations, Translation, and Responsibility for Data Science, and DARPA Award A000011479; PO: P000085618 for the Machine Common  ... 
arXiv:2008.01766v3 fatcat:vi4zp7ebxfcepesrz5b2vkxdcu

Recent Advances in Natural Language Processing via Large Pre-Trained Language Models: A Survey [article]

Bonan Min, Hayley Ross, Elior Sulem, Amir Pouran Ben Veyseh, Thien Huu Nguyen, Oscar Sainz, Eneko Agirre, Ilana Heintz, Dan Roth
2021 arXiv   pre-print
We present a survey of recent work that uses these large language models to solve NLP tasks via pre-training then fine-tuning, prompting, or text generation approaches.  ...  Large, pre-trained transformer-based language models such as BERT have drastically changed the Natural Language Processing (NLP) field.  ...  A more comprehensive list of prior work using different pre-training then fine-tuning strategies is given in Table 8 (Appendix B).  ...
arXiv:2111.01243v1 fatcat:4xfjkkby2bfnhdrhmrdlliy76m

Pre-trained Models for Natural Language Processing: A Survey [article]

Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, Xuanjing Huang
2020 arXiv   pre-print
Recently, the emergence of pre-trained models (PTMs) has brought natural language processing (NLP) to a new era. In this survey, we provide a comprehensive review of PTMs for NLP.  ...  This survey is intended to be a hands-on guide for understanding, using, and developing PTMs for various NLP tasks.  ...  Question answering (QA), or the narrower concept of machine reading comprehension (MRC), is an important application in the NLP community.  ...
arXiv:2003.08271v3 fatcat:ze64wcfecfgs7bguq4vajpsgpu

Framework for Deep Learning-Based Language Models using Multi-task Learning in Natural Language Understanding: A Systematic Literature Review and Future Directions

Rahul Manohar Samant, Mrinal Bachute, Shilpa Gite, Ketan Kotecha
2022 IEEE Access  
NLU (Natural Language Understanding) is a subset of NLP that includes tasks like machine translation, dialogue-based systems, natural language inference, text entailment, sentiment analysis, etc.  ...  Unfortunately, these models cannot be generalized to all NLP tasks with similar performance.  ...  Machine reading comprehension is the task of a computer program reading and comprehending text.  ...
doi:10.1109/access.2022.3149798 fatcat:k3kdt4eryzdfpk5k6w62jtlskm