4,060 Hits in 3.5 sec

Collecting Diverse Natural Language Inference Problems for Sentence Representation Evaluation

Adam Poliak, Aparajita Haldar, Rachel Rudinger, J. Edward Hu, Ellie Pavlick, Aaron Steven White, Benjamin Van Durme
2018 Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP  
The collection results from recasting 13 existing datasets from 7 semantic phenomena into a common NLI structure, resulting in over half a million labeled context-hypothesis pairs in total.  ...  Our collection of diverse datasets is available at http://www.decomp.net/, and will grow over time as additional resources are recast and added from novel sources.  ...  SciTail: A textual entailment dataset from science question answering. In AAAI.  ... 
doi:10.18653/v1/w18-5441 dblp:conf/emnlp/PoliakHRHPWD18a fatcat:jgh6i4foxrdajbzjndcojer7mi
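
As a concrete illustration of the "common NLI structure" this entry describes, the sketch below shows what a single recast example might look like as a labeled context-hypothesis pair. The field names, labels, and sentences are assumptions made for illustration, not the dataset's actual schema.

```python
from dataclasses import dataclass

# A minimal sketch of a recast NLI record: every example from the source
# resources becomes a labeled context-hypothesis pair. Field names and
# example values are illustrative assumptions only.
@dataclass
class NLIExample:
    context: str     # premise sentence taken from the original resource
    hypothesis: str  # sentence generated when recasting the annotation
    label: str       # "entailed" or "not-entailed"
    source: str      # which recast resource the pair came from

examples = [
    NLIExample("The chef ate the soup.", "Someone ate something.",
               "entailed", "semantic-proto-roles"),
    NLIExample("The chef ate the soup.", "The soup ate something.",
               "not-entailed", "semantic-proto-roles"),
]
for ex in examples:
    print(f"[{ex.label}] {ex.context} => {ex.hypothesis}")
```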

Visually Grounded Generation of Entailments from Premises [article]

Somaye Jafaritazehjani and Albert Gatt and Marc Tanti
2019 arXiv   pre-print
The main goals of this paper are (a) to investigate whether it is reasonable to frame NLI as a generation task; and (b) to consider the degree to which grounding textual premises in visual information  ...  We compare different neural architectures, showing through automatic and human evaluation that entailments can indeed be generated successfully.  ...  First, in order to properly assess the contribution of grounded language models for entailment generation, it is necessary to design datasets in which the textual and visual modalities are complementary  ... 
arXiv:1909.09788v1 fatcat:u7qdfyzllfd23gwefsy5lowzk4

Visually grounded generation of entailments from premises

Somayeh Jafaritazehjani, Albert Gatt, Marc Tanti
2019 Proceedings of the 12th International Conference on Natural Language Generation  
The main goals of this paper are (a) to investigate whether it is reasonable to frame NLI as a generation task; and (b) to consider the degree to which grounding textual premises in visual information  ...  We compare different neural architectures, showing through automatic and human evaluation that entailments can indeed be generated successfully.  ...  First, in order to properly assess the contribution of grounded language models for entailment generation, it is necessary to design datasets in which the textual and visual modalities are complementary  ... 
doi:10.18653/v1/w19-8625 dblp:conf/inlg/Jafaritazehjani19 fatcat:mzlbjtvepjcpdcfszswae5i454
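
To make the "NLI as a generation task" framing in the two entries above concrete, here is a minimal sketch that treats entailment generation as conditional sequence-to-sequence generation with an off-the-shelf model. The prompt format and the use of `t5-small` are assumptions for illustration; the paper trains its own encoder-decoder models and, in the grounded setting, additionally conditions on image features.

```python
# Sketch: generate a hypothesis (entailment) from a premise with a generic
# seq2seq model. This is NOT the authors' architecture, only an illustration
# of the task framing; a model fine-tuned for entailment generation is assumed.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "t5-small"  # assumption: any seq2seq LM fine-tuned for the task
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

premise = "Two dogs are running across a snowy field."
# The prompt prefix is an assumption; a fine-tuned model defines its own format.
inputs = tokenizer("generate entailment: " + premise, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```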

Infusing Knowledge into the Textual Entailment Task Using Graph Convolutional Networks [article]

Pavan Kapanipathi, Veronika Thost, Siva Sankalp Patel, Spencer Whitehead, Ibrahim Abdelaziz, Avinash Balakrishnan, Maria Chang, Kshitij Fadnis, Chulaka Gunasekara, Bassem Makni, Nicholas Mattei, Kartik Talamadupula (+1 others)
2019 arXiv   pre-print
Textual entailment is a fundamental task in natural language processing. Most approaches for solving the problem use only the textual content present in training data.  ...  We evaluate our approach on multiple textual entailment datasets and show that the use of external knowledge helps improve prediction accuracy.  ...  mapped concepts for the textual entailment task.  ... 
arXiv:1911.02060v2 fatcat:akslnlcywnbktmnul6oiepfa4y

Infusing Knowledge into the Textual Entailment Task Using Graph Convolutional Networks

Pavan Kapanipathi, Veronika Thost, Siva Sankalp Patel, Spencer Whitehead, Ibrahim Abdelaziz, Avinash Balakrishnan, Maria Chang, Kshitij Fadnis, Chulaka Gunasekara, Bassem Makni, Nicholas Mattei, Kartik Talamadupula (+1 others)
2020 Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20)
Textual entailment is a fundamental task in natural language processing. Most approaches for solving this problem use only the textual content present in training data.  ...  We evaluate our approach on multiple textual entailment datasets and show that the use of external knowledge helps the model to be robust and improves prediction accuracy.  ...  these mapped concepts for the textual entailment task.  ... 
doi:10.1609/aaai.v34i05.6318 fatcat:ltjb5yjj2fem3freid6kc7fzne
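
A minimal sketch of the graph-convolution building block these two entries rely on: concept nodes drawn from an external knowledge graph update their vectors by aggregating over their neighbours, H' = ReLU(ÂHW). The toy graph, features, and weights below are assumptions for illustration; the paper's actual model combines such graph encodings with a text encoder.

```python
import numpy as np

# One GCN propagation step over external-knowledge concept nodes.
def gcn_layer(adj, features, weight):
    adj_self = adj + np.eye(adj.shape[0])               # add self-loops
    deg_inv_sqrt = np.diag(1.0 / np.sqrt(adj_self.sum(axis=1)))
    adj_norm = deg_inv_sqrt @ adj_self @ deg_inv_sqrt   # symmetric normalization
    return np.maximum(adj_norm @ features @ weight, 0)  # ReLU activation

adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)  # 3 concept nodes
features = np.random.randn(3, 8)   # initial concept embeddings (toy values)
weight = np.random.randn(8, 8)     # learned projection (toy values)
print(gcn_layer(adj, features, weight).shape)  # (3, 8) updated concept vectors
```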

Exploring Lexical Irregularities in Hypothesis-Only Models of Natural Language Inference [article]

Qingyuan Hu, Yi Zhang, Kanishka Misra, Julia Rayz
2021 arXiv   pre-print
Natural Language Inference (NLI) or Recognizing Textual Entailment (RTE) is the task of predicting the entailment relation between a pair of sentences (premise and hypothesis).  ...  In this work, we analyze hypothesis-only models trained on one of the recast datasets provided in Poliak et al. for word-level patterns.  ...  Besides proto-roles biases, our research reveals lexical biases toward non-entailment labels. We present the top 5 frequent words in the dev split of the dataset in TABLE VII.  ... 
arXiv:2101.07397v3 fatcat:4esbzyfgpfgyzkcipz3iap2vmq
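
The word-level analysis described in this entry can be approximated with a simple word-label co-occurrence count over hypotheses, as in the sketch below. The tiny hand-written dataset and the choice of top-5 words are illustrative assumptions, not the paper's data or exact procedure.

```python
from collections import Counter, defaultdict

# Count how often each hypothesis word co-occurs with each label, to surface
# lexical cues a hypothesis-only model could exploit. The examples are made up.
data = [
    ("Someone was not eating.", "not-entailed"),
    ("The food was eaten.", "entailed"),
    ("Nobody slept.", "not-entailed"),
    ("A person slept.", "entailed"),
]
word_label = defaultdict(Counter)
for hypothesis, label in data:
    for word in hypothesis.lower().split():
        word_label[label][word.strip(".")] += 1

for label, counts in word_label.items():
    print(label, counts.most_common(5))  # top words per label
```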

Inference is Everything: Recasting Semantic Resources into a Unified Evaluation Framework

Aaron Steven White, Pushpendre Rastogi, Kevin Duh, Benjamin Van Durme
2017 International Joint Conference on Natural Language Processing  
We propose to unify a variety of existing semantic classification tasks, such as semantic role labeling, anaphora resolution, and paraphrase detection, under the heading of Recognizing Textual Entailment  ...  We demonstrate the value of this approach by investigating the behavior of a popular neural network RTE model.  ...  The views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements of DARPA or the U.S. Government.  ... 
dblp:conf/ijcnlp/WhiteRDD17 fatcat:5kn3ukugabhurllhbd6qhuhyem
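
As a toy example of the recasting idea in this entry, the sketch below turns an anaphora-resolution annotation into one entailed and one not-entailed hypothesis. The template and label conventions here are assumptions for illustration; the paper's actual recasting rules are more involved.

```python
# Recast an anaphora annotation into an RTE pair: the resolved antecedent
# yields an entailed hypothesis, a competing antecedent a not-entailed one.
def recast_anaphora(context, pronoun_clause, antecedent, distractor):
    entailed = (context, pronoun_clause.replace("He", antecedent), "entailed")
    not_entailed = (context, pronoun_clause.replace("He", distractor), "not-entailed")
    return [entailed, not_entailed]

pairs = recast_anaphora(
    context="John met Bill at the station. He waved.",
    pronoun_clause="He waved.",
    antecedent="John",
    distractor="Bill",
)
for premise, hypothesis, label in pairs:
    print(f"[{label}] {premise} => {hypothesis}")
```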

Expanding textual entailment corpora from Wikipedia using co-training

Fabio Massimo Zanzotto, Marco Pennacchiotti
2010 Workshop on the People's Web Meets NLP  
In this paper we propose a novel method to automatically extract large textual entailment datasets homogeneous to existing ones.  ...  We report empirical evidence that our method successfully expands existing textual entailment corpora.  ...  Any feature space of those reported in the textual entailment literature could be applied.  ... 
dblp:conf/acl-pwnlp/ZanzottoP10 fatcat:jgemcwd4e5f25fzk7oxiubvdci
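
For context, here is a compact sketch of the co-training loop that this kind of corpus expansion builds on: two classifiers trained on different feature views of candidate entailment pairs label an unlabeled pool, and each classifier's most confident predictions extend the other view's training set. The random feature vectors and the simplified pool bookkeeping are assumptions for illustration, not the paper's feature spaces.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Generic co-training sketch: real feature extraction is replaced with random
# vectors, and selected items are not removed from the pool (a simplification).
rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(20, 5)), rng.normal(size=(20, 5))   # seed data, two views
y_seed = rng.integers(0, 2, size=20)
y1, y2 = y_seed.copy(), y_seed.copy()
U1, U2 = rng.normal(size=(100, 5)), rng.normal(size=(100, 5)) # unlabeled pool, two views

for _ in range(3):                                            # a few co-training rounds
    clf1 = LogisticRegression().fit(X1, y1)
    clf2 = LogisticRegression().fit(X2, y2)
    top1 = clf1.predict_proba(U1).max(axis=1).argsort()[-5:]  # clf1's most confident items
    top2 = clf2.predict_proba(U2).max(axis=1).argsort()[-5:]  # clf2's most confident items
    X2 = np.vstack([X2, U2[top1]])                            # clf1 teaches view 2
    y2 = np.concatenate([y2, clf1.predict(U1[top1])])
    X1 = np.vstack([X1, U1[top2]])                            # clf2 teaches view 1
    y1 = np.concatenate([y1, clf2.predict(U2[top2])])

print(len(y1), len(y2))                                       # labeled sets grow each round
```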

Collecting Diverse Natural Language Inference Problems for Sentence Representation Evaluation

Adam Poliak, Aparajita Haldar, Rachel Rudinger, J. Edward Hu, Ellie Pavlick, Aaron Steven White, Benjamin Van Durme
2018 Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing  
The collection results from recasting 13 existing datasets from 7 semantic phenomena into a common NLI structure, resulting in over half a million labeled context-hypothesis pairs in total.  ...  We present a large scale collection of diverse natural language inference (NLI) datasets that help provide insight into how well a sentence representation captures distinct types of reasoning.  ...  Acknowledgements We thank Diyi Yang for help with the PunsOfTheDay dataset, the JSALT "Sentence Representation" team for insightful discussions, and three anonymous reviewers for feedback.  ... 
doi:10.18653/v1/d18-1007 dblp:conf/emnlp/PoliakHRHPWD18 fatcat:baoa6zqnzzfklmexao77rmqkfe

Towards Debiasing NLU Models from Unknown Biases [article]

Prasetya Ajie Utama, Nafise Sadat Moosavi, Iryna Gurevych
2020 arXiv   pre-print
NLU models often exploit biases to achieve high dataset-specific performance without properly learning the intended task.  ...  In this work, we present the first step to bridge this gap by introducing a self-debiasing framework that prevents models from mainly utilizing biases without knowing them in advance.  ...  The third PASCAL recognizing textual entailment challenge. In Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, pages 1-9, Prague.  ... 
arXiv:2009.12303v4 fatcat:to6ealbpvjftdc2ksrfzp6xqau
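
One common recipe in this debiasing line of work is example reweighting: down-weight the main model's loss on examples that a shallow bias model already classifies confidently and correctly. The sketch below shows that recipe in isolation; how the bias model is obtained without prior knowledge of the bias (the "self-debiasing" contribution of the paper) is not modeled here, and all numbers are toy values.

```python
import numpy as np

# Reweighted cross-entropy: a confident bias model on the gold label means a
# small weight for that example in the main model's loss. Toy values only.
def reweighted_cross_entropy(main_probs, bias_probs, labels):
    idx = np.arange(len(labels))
    weights = 1.0 - bias_probs[idx, labels]          # confident bias model => small weight
    losses = -np.log(main_probs[idx, labels] + 1e-12)
    return float(np.mean(weights * losses))

main_probs = np.array([[0.7, 0.2, 0.1], [0.2, 0.5, 0.3]])    # main model softmax outputs
bias_probs = np.array([[0.95, 0.03, 0.02], [0.4, 0.3, 0.3]])  # shallow bias model outputs
labels = np.array([0, 1])
print(reweighted_cross_entropy(main_probs, bias_probs, labels))
```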

Two-Step Classification using Recasted Data for Low Resource Settings

Shagun Uppal, Vivek Gupta, Avinash Swaminathan, Haimin Zhang, Debanjan Mahata, Rakesh Gosangi, Rajiv Ratn Shah, Amanda Stent
2020 International Joint Conference on Natural Language Processing  
We study the consistency in predictions of the textual entailment models and propose a consistency regulariser to remove pairwise-inconsistencies in predictions.  ...  To address scarcity of data in low-resource languages such as Hindi, we use data recasting to create four NLI datasets from existing four text classification datasets in Hindi language.  ...  Textual Entailment One straightforward application of NLI comes with evaluating the task of Textual Entailment (TE).  ... 
dblp:conf/ijcnlp/UppalGSZMGSS20 fatcat:73qm2zv7j5duflvxctxnnyqk2u
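
One plausible form of the consistency regulariser mentioned in this entry is sketched below: when two recast hypotheses derived from the same source example are mutually exclusive, the model is penalized for marking both as entailed. This specific penalty is an assumption made for illustration, not necessarily the formulation used in the paper.

```python
# Toy pairwise consistency penalty over the "entailed" probabilities assigned
# to two mutually exclusive recast hypotheses from the same source example.
def consistency_penalty(p_entailed_a, p_entailed_b):
    return p_entailed_a * p_entailed_b

print(consistency_penalty(0.9, 0.8))   # inconsistent pair => large penalty
print(consistency_penalty(0.9, 0.05))  # consistent pair   => small penalty
```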

Posterior Differential Regularization with f-divergence for Improving Model Robustness [article]

Hao Cheng, Xiaodong Liu, Lis Pereira, Yaoliang Yu, Jianfeng Gao
2021 arXiv   pre-print
In particular, with a proper f-divergence, a BERT-base model can achieve comparable generalization as its BERT-large counterpart for in-domain, adversarial and domain shift scenarios, indicating the great  ...  Additionally, we generalize the posterior differential regularization to the family of f-divergences and characterize the overall regularization framework in terms of Jacobian matrix.  ...  Following the literature, we report the exact match (EM) and F1 scores for QA datasets and classification accuracy for textual entailment and sentiment analysis.  ... 
arXiv:2010.12638v2 fatcat:dtzraw6mczen3bdurm7xkflr64
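
A sketch of the posterior differential idea described in this entry: keep the model's output distribution on a clean input close to its distribution on a perturbed version of that input under some f-divergence, here symmetric KL. The perturbation, divergence choice, and weighting below are illustrative assumptions, not the paper's exact training setup.

```python
import numpy as np

# Regularize the difference between posteriors on clean and perturbed inputs.
def kl(p, q):
    return float(np.sum(p * np.log((p + 1e-12) / (q + 1e-12))))

def posterior_differential_loss(ce_loss, p_clean, p_perturbed, lam=1.0):
    divergence = 0.5 * (kl(p_clean, p_perturbed) + kl(p_perturbed, p_clean))
    return ce_loss + lam * divergence

p_clean = np.array([0.7, 0.2, 0.1])        # posterior on the original input
p_perturbed = np.array([0.6, 0.25, 0.15])  # posterior on the perturbed input
print(posterior_differential_loss(ce_loss=0.35, p_clean=p_clean, p_perturbed=p_perturbed))
```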

A Lexical Alignment Model for Probabilistic Textual Entailment [chapter]

Oren Glickman, Ido Dagan, Moshe Koppel
2006 Lecture Notes in Computer Science  
This paper describes the Bar-Ilan system participating in the Recognising Textual Entailment Challenge.  ...  Finally, we report the results of the model on the Recognising Textual Entailment challenge dataset along with some analysis.  ...  Acknowledgments This work was supported in part by the IST Programme of the European Community, under the PASCAL Network of Excellence, IST-2002-506778.  ... 
doi:10.1007/11736790_16 fatcat:ulgsf26t2zhhbkmofwf2ncjaha
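
The lexical alignment idea behind this entry can be sketched as aligning each hypothesis word to the text word that best "triggers" it and multiplying the per-word lexical entailment probabilities. The probability table and default value below are invented for illustration; the original model estimates such probabilities from corpus co-occurrence statistics.

```python
# Toy lexical entailment probabilities p(hypothesis_word | text_word).
lexical_prob = {
    ("purchased", "bought"): 0.9,
    ("company", "firm"): 0.8,
}

def entailment_probability(text_words, hypothesis_words, default=0.1):
    prob = 1.0
    for h in hypothesis_words:
        # align each hypothesis word to its best-matching text word
        best = max(lexical_prob.get((t, h), default) for t in text_words)
        prob *= best
    return prob

print(entailment_probability(["purchased", "company"], ["bought", "firm"]))  # 0.72
```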

Hypothesis Only Baselines in Natural Language Inference

Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, Benjamin Van Durme
2018 Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics  
Our analysis suggests that statistical irregularities may allow a model to perform NLI in some datasets beyond what should be achievable without access to the context.  ...  Especially when an NLI dataset assumes inference is occurring based purely on the relationship between a context and a hypothesis, it follows that assessing entailment relations while ignoring the provided  ...  The views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements of DARPA or the U.S. Government.  ... 
doi:10.18653/v1/s18-2023 dblp:conf/starsem/PoliakNHRD18 fatcat:wd3z6g4z3fcnroqos7yj65z65e
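
A minimal sketch of the hypothesis-only baseline studied in this paper: a classifier that never sees the premise. If such a model beats the majority-class baseline on a real NLI dataset, the hypotheses alone carry statistical cues about the label. The four examples below are made up; in practice one would load SNLI, MNLI, or the recast datasets.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Train on hypotheses only; the premise is deliberately ignored.
hypotheses = [
    "A man is sleeping.",
    "Nobody is outside.",
    "A woman is playing an instrument.",
    "The animals are not moving.",
]
labels = ["entailment", "contradiction", "entailment", "contradiction"]

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(hypotheses, labels)
print(model.predict(["No one is sleeping."]))
```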

A Survey on Recognizing Textual Entailment as an NLP Evaluation [article]

Adam Poliak
2020 arXiv   pre-print
Recognizing Textual Entailment (RTE) was proposed as a unified evaluation framework to compare semantic understanding of different NLP systems.  ...  We then focus our discussion on RTE by highlighting prominent RTE datasets as well as advances in RTE dataset that focus on specific linguistic phenomena that can be used to evaluate NLP systems on a fine-grained  ...  Table 2 provides such examples for both datasets. Rudinger et al. (2017) illustrated how eliciting textual data in this fashion creates stereotypical biases in SNLI.  ... 
arXiv:2010.03061v1 fatcat:jfmgkh4ginalzauawlqdbkb6pq
Showing results 1 — 15 out of 4,060 results