
Evaluating Models' Local Decision Boundaries via Contrast Sets [article]

Matt Gardner, Yoav Artzi, Victoria Basmova, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hanna Hajishirzi (+14 others)
2020 arXiv   pre-print
Although our contrast sets are not explicitly adversarial, model performance is significantly lower on them than on the original test sets, by up to 25% in some cases.  ...  We demonstrate the efficacy of contrast sets by creating them for 10 diverse NLP datasets (e.g., DROP reading comprehension, UD parsing, IMDb sentiment analysis).  ...  We created contrast sets for 10 NLP datasets and released this data as new evaluation benchmarks.  ...
arXiv:2004.02709v2 fatcat:zwreyqnxiveyvpktpwazmczfv4

On Robustness and Bias Analysis of BERT-based Relation Extraction [article]

Luoqiu Li, Xiang Chen, Hongbin Ye, Zhen Bi, Shumin Deng, Ningyu Zhang, Huajun Chen
2021 arXiv   pre-print
Fine-tuning pre-trained models has achieved impressive performance on standard natural language processing benchmarks. However, the generalizability of the resulting models remains poorly understood.  ...  In this study, we analyze a fine-tuned BERT model from different perspectives using relation extraction.  ...  definitions of NLP generalization.  ...
arXiv:2009.06206v5 fatcat:65yuns2qe5auzer7xuuv7tjuli

Automatic coding of students' writing via Contrastive Representation Learning in the Wasserstein space [article]

Ruijie Jiang, Julia Gouvea, David Hammer, Eric Miller, Shuchin Aeron
2020 arXiv   pre-print
contrastive learning set-up.  ...  ) model for capturing language generation as a state-space model, is able to quantitatively capture the scoring, with a high Quadratic Weighted Kappa (QWK) prediction score, when trained via a novel  ...  Given this set-up, our approach is to learn useful representations via contrastive learning using the triplet loss [21, 22].  ...
arXiv:2011.13384v2 fatcat:ybywlsghfrcpzhlgkxj7aajfaq
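The snippet above mentions learning representations via contrastive learning with a triplet loss. Below is a minimal PyTorch sketch of a standard triplet margin loss; the encoder, margin value, and batch construction are assumptions for illustration, not details taken from this paper.

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet margin loss: pull the positive toward the anchor,
    push the negative at least `margin` farther away."""
    d_pos = F.pairwise_distance(anchor, positive)   # anchor-positive distance
    d_neg = F.pairwise_distance(anchor, negative)   # anchor-negative distance
    return torch.clamp(d_pos - d_neg + margin, min=0).mean()

# Hypothetical usage with three batches of sentence embeddings (dim 128).
anchor, positive, negative = (torch.randn(32, 128) for _ in range(3))
loss = triplet_loss(anchor, positive, negative)
# PyTorch also ships an equivalent built-in loss:
loss_builtin = torch.nn.TripletMarginLoss(margin=1.0)(anchor, positive, negative)
```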

Astraea: Grammar-based Fairness Testing [article]

Ezekiel Soremekun and Sakshi Udeshi and Sudipta Chattopadhyay
2022 arXiv   pre-print
Furthermore, ASTRAEA improves software fairness by ~76% via model retraining.  ...  ASTRAEA was evaluated on 18 software systems that provide three major natural language processing (NLP) services. In our evaluation, ASTRAEA generated fairness violations at a rate of ~18%.  ...  We implement ASTRAEA and evaluate it on a total of 18 models for a variety of NLP tasks.  ...
arXiv:2010.02542v5 fatcat:n6ka7pbchrdczpnsgcjpomybfm
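Grammar-based fairness testing generates inputs from a grammar and flags cases where an NLP service treats inputs that differ only in a protected attribute differently. The sketch below illustrates that general idea with a toy context-free grammar and a hypothetical `predict` function; it is not ASTRAEA's actual grammar, fault model, or implementation.

```python
import random

# Toy grammar: sentences whose only varying part is a protected attribute (<NAME>).
GRAMMAR = {
    "<S>": ["<NAME> is a <ADJ> engineer .", "<NAME> asked for a loan ."],
    "<NAME>": ["Alice", "Bob", "Aisha", "Carlos"],
    "<ADJ>": ["brilliant", "reliable"],
}

def expand(symbol):
    """Recursively expand a (non-)terminal into a concrete string."""
    if symbol not in GRAMMAR:
        return symbol
    return " ".join(expand(tok) for tok in random.choice(GRAMMAR[symbol]).split())

def find_violation(predict, trials=100):
    """Sample a template, vary only <NAME>, and report a template whose
    predictions diverge across names (a candidate fairness violation)."""
    for _ in range(trials):
        template = random.choice(GRAMMAR["<S>"]).replace("<ADJ>", expand("<ADJ>"))
        outputs = {predict(template.replace("<NAME>", n)) for n in GRAMMAR["<NAME>"]}
        if len(outputs) > 1:          # equivalent inputs, different outputs
            return template
    return None
```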

Contrastive Demonstration Tuning for Pre-trained Language Models [article]

Xiaozhuan Liang, Ningyu Zhang, Siyuan Cheng, Zhen Bi, Zhenru Zhang, Chuanqi Tan, Songfang Huang, Fei Huang, Huajun Chen
2022 arXiv   pre-print
Pretrained language models can be effectively stimulated by textual prompts or demonstrations, especially in low-data scenarios.  ...  In this paper, we propose a novel pluggable, extensible, and efficient approach named contrastive demonstration tuning, which is free of demonstration sampling.  ...  Settings: we use RoBERTa-large as the pretrained language model and set K = 16. We employ AdamW as the optimizer with the same learning rate of 1e-5 and batch size of 8 for all tasks.  ...
arXiv:2204.04392v2 fatcat:oczxkefaz5fw3mo3moachgb4dq
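The snippet only gives hyperparameters (RoBERTa-large, K = 16 examples per class, AdamW, learning rate 1e-5, batch size 8). A minimal fine-tuning setup reflecting just those values is sketched below with Hugging Face Transformers; the task head (sequence classification), label count, and training loop are assumptions, since the paper's prompt-based tuning is more involved.

```python
import torch
from torch.optim import AdamW
from transformers import AutoTokenizer, AutoModelForSequenceClassification

K = 16          # few-shot examples per class (from the snippet; used when sampling data)
LR = 1e-5       # learning rate reported in the snippet
BATCH_SIZE = 8  # batch size reported in the snippet

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large", num_labels=2)
optimizer = AdamW(model.parameters(), lr=LR)

def train_step(texts, labels):
    """One optimization step on a batch of (text, label) pairs."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    loss = model(**batch, labels=torch.tensor(labels)).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```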

Development and web deployment of an automated neuroradiology MRI protocoling tool with natural language processing

Yeshwant Reddy Chillakuru, Shourya Munjal, Benjamin Laguna, Timothy L. Chen, Gunvant R. Chaudhari, Thienkhai Vu, Youngho Seo, Jared Narvid, Jae Ho Sohn
2021 BMC Medical Informatics and Decision Making  
We aim to develop, evaluate, and deploy an NLP model that automates protocol assignment, given the clinician indication text.  ...  fastText and XGBoost were used to develop two NLP models to classify spine and head MRI protocols, respectively.  ...  The training and test set evaluation code is available at https://bit.ly/2ytg4FL.  ...
doi:10.1186/s12911-021-01574-y pmid:34253196 fatcat:iapeobf5mjekbkuf3qhhixutl4
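Of the two classifier families named in the snippet, fastText is the simpler to sketch. Below is a minimal supervised fastText text-classification setup; the file names, label scheme, and hyperparameters are illustrative assumptions, not the authors' configuration.

```python
import fasttext

# fastText supervised mode expects one example per line:
#   __label__<protocol_name> <free-text clinician indication>
# e.g.  __label__brain_with_contrast 45yo with new onset seizures ...
model = fasttext.train_supervised(
    input="protocols_train.txt",   # hypothetical training file
    epoch=25,
    lr=0.5,
    wordNgrams=2,
)

labels, probs = model.predict("rule out acute stroke, left sided weakness")
model.save_model("protocol_classifier.bin")
```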

GluonCV and GluonNLP: Deep Learning in Computer Vision and Natural Language Processing [article]

Jian Guo, He He, Tong He, Leonard Lausen, Mu Li, Haibin Lin, Xingjian Shi, Chenguang Wang, Junyuan Xie, Sheng Zha, Aston Zhang, Hang Zhang (+4 others)
2020 arXiv   pre-print
These toolkits provide state-of-the-art pre-trained models, training scripts, and training logs, to facilitate rapid prototyping and promote reproducible research.  ...  Leveraging the MXNet ecosystem, the deep learning models in GluonCV and GluonNLP can be deployed onto a variety of platforms with different programming languages.  ...  Specifically, we evaluate popular or state-of-the-art models on standard benchmark data sets.  ... 
arXiv:1907.04433v2 fatcat:62ptghynwfbxnbbtxgnhxpplu4
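Both toolkits expose pre-trained models through a model zoo. The sketch below shows the general pattern for loading a pre-trained BERT with GluonNLP's 0.x-style API; the exact model name, dataset name, and call signature are recalled from that API generation and should be checked against the installed version.

```python
import mxnet as mx
import gluonnlp as nlp

# Load a pre-trained BERT-base encoder and its vocabulary from the model zoo
# (GluonNLP 0.x style API; names may differ in newer releases).
model, vocab = nlp.model.get_model(
    "bert_12_768_12",
    dataset_name="book_corpus_wiki_en_uncased",
    pretrained=True,
    ctx=mx.cpu(),
    use_decoder=False,
    use_classifier=False,
)

# Encode a short sentence: tokenize, map tokens to ids, run the encoder.
tokenizer = nlp.data.BERTTokenizer(vocab, lower=True)
tokens = ["[CLS]"] + tokenizer("natural language processing") + ["[SEP]"]
token_ids = mx.nd.array([vocab[tokens]])
segment_ids = mx.nd.zeros_like(token_ids)
valid_length = mx.nd.array([len(tokens)])
seq_encoding, pooled = model(token_ids, segment_ids, valid_length)
```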

STAMP 4 NLP – An Agile Framework for Rapid Quality-Driven NLP Applications Development [chapter]

Philipp Kohl, Oliver Schmidts, Lars Klöser, Henri Werth, Bodo Kraft, Albert Zündorf
2021 Communications in Computer and Information Science  
We introduce STAMP 4 NLP as an iterative and incremental process model for developing NLP applications.  ...  Instantiating our process model allows prototypes to be created efficiently by utilizing templates, conventions, and implementations, enabling developers and data scientists to focus on the business goals.  ...  test environment, model evaluation, publish results via a dashboard.  ...
doi:10.1007/978-3-030-85347-1_12 fatcat:6qlgra2tqngfvpwgsxrm6qlibm

Disentangled Contrastive Learning for Learning Robust Textual Representations [article]

Xiang Chen, Xin Xie, Zhen Bi, Hongbin Ye, Shumin Deng, Ningyu Zhang, Huajun Chen
2021 arXiv   pre-print
Although the self-supervised pre-training of transformer models has revolutionized natural language processing (NLP) applications and achieved state-of-the-art results with  ...  In this study, we propose a disentangled contrastive learning method that separately optimizes the uniformity and alignment of representations without negative sampling.  ...  GLUE [26] is an NLP benchmark aimed at evaluating the performance of downstream tasks of the pre-trained models.  ...
arXiv:2104.04907v2 fatcat:df2h3uywpreqfm55k7lxc2253m
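The snippet says the method optimizes the uniformity and alignment of representations separately, without negative sampling. Below is a minimal sketch of the standard alignment and uniformity objectives in the Wang & Isola (2020) formulation, which this line of work builds on conceptually; the paper's exact losses may differ.

```python
import torch
import torch.nn.functional as F

def alignment(x, y, alpha=2):
    """Mean distance between positive pairs of L2-normalized embeddings."""
    return (x - y).norm(p=2, dim=1).pow(alpha).mean()

def uniformity(x, t=2):
    """Log of the mean pairwise Gaussian potential: low when embeddings
    spread uniformly over the unit hypersphere."""
    return torch.pdist(x, p=2).pow(2).mul(-t).exp().mean().log()

# Hypothetical usage: x and y are two augmented "views" of the same batch.
x = F.normalize(torch.randn(64, 128), dim=1)
y = F.normalize(torch.randn(64, 128), dim=1)
loss = alignment(x, y) + uniformity(x)
```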

Metamorphic Testing and Certified Mitigation of Fairness Violations in NLP Models

Pingchuan Ma, Shuai Wang, Jin Liu
2020 Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence  
Furthermore, inspired by recent breakthroughs in the certified robustness of machine learning, we formulate NLP model fairness in a practical setting as (ε, k)-fairness and accordingly smooth the model  ...  We further enhance the evaluated models by adding a certified fairness guarantee at a modest cost.  ...  We further enhance the evaluated (commercial) NLP models w.r.t. the certified guarantees at a modest cost.  ...
doi:10.24963/ijcai.2020/64 dblp:conf/ijcai/0004WL20 fatcat:ylq3nmlbzjdu3jcj7yel3dzghy
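The core metamorphic relation in fairness testing is that a prediction should not change when only a protected attribute in the input is mutated. A minimal sketch of such a check is shown below, with a hypothetical `predict` function and token pairs; the paper's (ε, k)-fairness formulation and model smoothing go well beyond this.

```python
import itertools

# Hypothetical pairs of protected-attribute tokens to swap.
MUTATIONS = [("he", "she"), ("his", "her"), ("John", "Maria")]

def mutate(text, a, b):
    """Replace whole-word occurrences of `a` with `b`."""
    return " ".join(b if tok == a else tok for tok in text.split())

def metamorphic_violations(predict, texts):
    """Count inputs whose prediction flips under a protected-attribute mutation."""
    swaps = list(itertools.chain(MUTATIONS, [(b, a) for a, b in MUTATIONS]))
    violations = 0
    for text in texts:
        baseline = predict(text)
        if any(predict(mutate(text, a, b)) != baseline for a, b in swaps):
            violations += 1
    return violations
```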

Character-Level Neural Language Modelling in the Clinical Domain

Markus Kreuzthaler, Michel Oleynik, Stefan Schulz
2020 Studies in Health Technology and Informatics  
After the training phase we accessed the top 10 most similar character-induced word embeddings related to a clinical concept via a nearest neighbour search and evaluated the expected interconnected semantics  ...  The results support recent work on general language modelling that raised the question whether token-based representation schemes are still necessary for specific NLP tasks.  ...  The work presented here uses a single LSTM network in contrast to more complex models from recent research on deep transformer models like BERT [7] or stacked Bi-LSTMs like ELMo [8] for contextual  ... 
doi:10.3233/shti200127 pmid:32570351 fatcat:xhsr2prk3bhoxemgqkyfan2d7m
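Retrieving the top 10 most similar embeddings for a clinical concept reduces to a cosine-similarity nearest neighbour search. A minimal NumPy sketch follows; the embedding matrix and vocabulary here are random placeholders, not the authors' trained character-induced word embeddings.

```python
import numpy as np

def top_k_neighbours(query_vec, embeddings, vocab, k=10):
    """Return the k words whose embeddings have the highest cosine
    similarity to query_vec (skipping the exact self-match at rank 0)."""
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    q = query_vec / np.linalg.norm(query_vec)
    sims = emb @ q
    order = np.argsort(-sims)[: k + 1]
    return [(vocab[i], float(sims[i])) for i in order][1 : k + 1]

# Placeholder data: 5000 word vectors of dimension 300.
vocab = [f"word_{i}" for i in range(5000)]
embeddings = np.random.randn(5000, 300)
neighbours = top_k_neighbours(embeddings[42], embeddings, vocab, k=10)
```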

Improved Text Classification via Contrastive Adversarial Training [article]

Lin Pan, Chung-Wei Hang, Avirup Sil, Saloni Potdar
2022 arXiv   pre-print
Specifically, during fine-tuning we generate adversarial examples by perturbing the word embeddings of the model and perform contrastive learning on clean and adversarial examples in order to teach the model to learn noise-invariant representations.  ...  We use strong baseline models and evaluate our method on a range of GLUE benchmark tasks and three intent classification datasets in different settings.  ...
arXiv:2107.10137v2 fatcat:jcp7wkmorvah7doqoorvqz6h3q
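The method combines two ingredients: an adversarial perturbation of the word embeddings and a contrastive loss between the clean and adversarial views. A rough sketch of both (an FGSM-style perturbation and an InfoNCE-style loss) is shown below; the encoder, step size, and exact loss formulation are assumptions, not the paper's code.

```python
import torch
import torch.nn.functional as F

def perturb_embeddings(embeddings, loss, epsilon=1e-2):
    """FGSM-style perturbation in embedding space along the loss gradient.
    `embeddings` must require grad and `loss` must depend on it."""
    grad, = torch.autograd.grad(loss, embeddings, retain_graph=True)
    return embeddings + epsilon * grad / (grad.norm(dim=-1, keepdim=True) + 1e-12)

def contrastive_loss(clean, adv, temperature=0.05):
    """InfoNCE between clean and adversarial sentence representations:
    each clean example should match its own adversarial view."""
    clean, adv = F.normalize(clean, dim=-1), F.normalize(adv, dim=-1)
    logits = clean @ adv.t() / temperature      # (batch, batch) similarity matrix
    targets = torch.arange(clean.size(0))       # positives sit on the diagonal
    return F.cross_entropy(logits, targets)
```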

Data-Efficient Pretraining via Contrastive Self-Supervision [article]

Nils Rethmeier, Isabelle Augenstein
2021 arXiv   pre-print
In this work, we evaluate against three core challenges for resource-efficient learning.  ...  For natural language processing 'text-to-text' tasks, the prevailing approaches heavily rely on pretraining large self-supervised models on increasingly larger 'task-external' data.  ...  Like T5 (Raffel et al., 2020), CLESS models arbitrary NLP tasks as 'text-to-text' prediction, but extends T5 via (a) data-efficient contrastive self-supervision and (b) by performing 'text-to-text  ...
arXiv:2010.01061v4 fatcat:tml3ujz5ezdxbmn2i2reklyenu

A Primer on Contrastive Pretraining in Language Processing: Methods, Lessons Learned and Perspectives [article]

Nils Rethmeier, Isabelle Augenstein
2021 arXiv   pre-print
For this reason, some contrastive NLP pretraining methods contrast over input-label pairs, rather than over input-input pairs, using methods from Metric Learning and Energy Based Models.  ...  In this survey, we summarize recent self-supervised and supervised contrastive NLP pretraining methods and describe where they are used to improve language modeling, few or zero-shot learning, pretraining  ...  During pretraining, they learn to contrast real data text continuations and language model generated text continuations via conditional NCE from §2.1.  ... 
arXiv:2102.12982v1 fatcat:ivzglgl3zvczddywwwjdqewkmi
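Contrasting over input-label pairs, as described in the snippet, means scoring an input embedding against embeddings of the candidate labels rather than against other inputs. A minimal sketch of that scoring with an NCE/InfoNCE-style objective is below; the encoders producing the embeddings are placeholders.

```python
import torch
import torch.nn.functional as F

def input_label_nce(input_emb, label_emb, true_label_idx, temperature=0.1):
    """Score each input against all candidate label embeddings; the true
    label is the positive, every other label acts as a negative."""
    input_emb = F.normalize(input_emb, dim=-1)   # (batch, dim)
    label_emb = F.normalize(label_emb, dim=-1)   # (n_labels, dim)
    logits = input_emb @ label_emb.t() / temperature
    return F.cross_entropy(logits, true_label_idx)

# Hypothetical usage: 32 inputs, 10 candidate label descriptions, dim 256.
loss = input_label_nce(torch.randn(32, 256), torch.randn(10, 256),
                       torch.arange(32) % 10)
```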

Applying Recent Innovations from NLP to MOOC Student Course Trajectory Modeling [article]

Clarence Chen, Zachary Pardos
2020 arXiv   pre-print
This paper presents several strategies that can improve neural network-based predictive methods for MOOC student course trajectory modeling, applying multiple ideas previously used to tackle NLP (Natural  ...  entropy of each dataset as a set of discrete random processes by fitting an HMM (Hidden Markov Model) to each dataset.  ...  Table 2 provides the full set of hyperparameters used for training and evaluating each model on each course record dataset.  ...
arXiv:2001.08333v2 fatcat:bm4rxrd7j5bn5hhv5ykhcrmkia
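The snippet characterizes each trajectory dataset by fitting an HMM and treating the sequences as discrete random processes. A minimal sketch with hmmlearn is below; the model class is CategoricalHMM in recent hmmlearn releases (MultinomialHMM in older ones), and estimating the entropy rate as negative mean log-likelihood per symbol is my assumption about how such a figure would be computed, not the paper's exact procedure.

```python
import numpy as np
from hmmlearn import hmm

def entropy_rate_bits(sequences, n_states=8, n_iter=100):
    """Fit a discrete-emission HMM to integer-coded course sequences and
    estimate the entropy rate as negative mean log-likelihood per symbol."""
    X = np.concatenate(sequences).reshape(-1, 1)   # stacked observations
    lengths = [len(s) for s in sequences]          # per-sequence lengths
    model = hmm.CategoricalHMM(n_components=n_states, n_iter=n_iter)
    model.fit(X, lengths)
    log_likelihood = model.score(X, lengths)       # natural-log likelihood
    return -log_likelihood / (len(X) * np.log(2))  # convert nats to bits

# Hypothetical data: each student trajectory is a sequence of course ids.
trajectories = [np.random.randint(0, 20, size=np.random.randint(5, 15))
                for _ in range(200)]
print(entropy_rate_bits(trajectories))
```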
Showing results 1 — 15 out of 19,225 results