111 Hits in 2.5 sec

SciTail: A Textual Entailment Dataset from Science Question Answering

Tushar Khot, Ashish Sabharwal, Peter Clark
2018 Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18)
We present a new dataset and model for textual entailment, derived from treating multiple-choice question-answering as an entailment problem.  ...  Different from existing entailment datasets, we create hypotheses from science questions and the corresponding answer candidates, and premises from relevant web sentences retrieved from a large corpus.  ...  new natural dataset for textual entailment, SCITAIL, derived directly from an end task, namely that of Science question answering.  ... 
doi:10.1609/aaai.v32i1.12022 fatcat:euqrudwzj5gedphidbnfgj444i

Bridging Knowledge Gaps in Neural Entailment via Symbolic Models

Dongyeop Kang, Tushar Khot, Ashish Sabharwal, Peter Clark
2018 Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing  
On the SciTail dataset, NSnet outperforms a simpler combination of the two predictions by 3% and the base entailment model by 5%.  ...  We focus on filling these knowledge gaps in the Science Entailment task, by leveraging an external structured knowledge base (KB) of science facts.  ...  Introduction Textual entailment, a key challenge in natural language understanding, is a sub-problem in many end tasks such as question answering and information extraction.  ... 
doi:10.18653/v1/d18-1535 dblp:conf/emnlp/KangKSC18 fatcat:bfc3of2epberxltpykqzw2sjme

Improving Natural Language Inference Using External Knowledge in the Science Questions Domain [article]

Xiaoyan Wang, Pavan Kapanipathi, Ryan Musa, Mo Yu, Kartik Talamadupula, Ibrahim Abdelaziz, Maria Chang, Achille Fokoue, Bassem Makni, Nicholas Mattei, Michael Witbrock
2018 arXiv   pre-print
Our model achieves the new state-of-the-art performance on the NLI problem over the SciTail science questions dataset.  ...  Present approaches to the problem largely focus on learning-based methods that use only textual information in order to classify whether a given premise entails, contradicts, or is neutral with respect  ...  Datasets We use the SciTail dataset (Khot, Sabharwal, and Clark 2018), which is a textual entailment dataset derived from publicly released science domain multiple choice question answering datasets (Welbl, Liu,  ... 
arXiv:1809.05724v2 fatcat:7dolmjp3rvgxljilc6qanqsgvy

Improving Natural Language Inference Using External Knowledge in the Science Questions Domain

Xiaoyan Wang, Pavan Kapanipathi, Ryan Musa, Mo Yu, Kartik Talamadupula, Ibrahim Abdelaziz, Maria Chang, Achille Fokoue, Bassem Makni, Nicholas Mattei, Michael Witbrock
2019 Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19)
Our model achieves close to state-of-the-art performance for NLI on the SciTail science questions dataset.  ...  Present approaches to the problem largely focus on learning-based methods that use only textual information in order to classify whether a given premise entails, contradicts, or is neutral with respect  ...  Datasets We use the SciTail dataset, which is a textual entailment dataset derived from publicly released science domain multiple choice question answering datasets (Welbl, Liu, and Gardner 2017; Clark  ... 
doi:10.1609/aaai.v33i01.33017208 fatcat:chb6qvwgrzav5f5x2olzavo5bi

Bridging Knowledge Gaps in Neural Entailment via Symbolic Models [article]

Dongyeop Kang, Tushar Khot, Ashish Sabharwal, Peter Clark
2018 arXiv   pre-print
On the SciTail dataset, NSnet outperforms a simpler combination of the two predictions by 3% and the base entailment model by 5%.  ...  We focus on filling these knowledge gaps in the Science Entailment task, by leveraging an external structured knowledge base (KB) of science facts.  ...  Introduction Textual entailment, a key challenge in natural language understanding, is a sub-problem in many end tasks such as question answering and information extraction.  ... 
arXiv:1808.09333v2 fatcat:vonemt4zffbq3a7sycugheezga

Multi-Task Deep Neural Networks for Natural Language Understanding [article]

Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao
2019 arXiv   pre-print
We also demonstrate using the SNLI and SciTail datasets that the representations learned by MT-DNN allow domain adaptation with substantially fewer in-domain labels than the pre-trained BERT representations  ...  MT-DNN not only leverages large amounts of cross-task data, but also benefits from a regularization effect that leads to more general representations in order to adapt to new tasks and domains.  ...  Acknowledgements We would like to thank Jade Huang from Microsoft for her generous help on this work.  ... 
arXiv:1901.11504v2 fatcat:rrr7tqnfzbb43nhw5ulhpmd6im

AWE: Asymmetric Word Embedding for Textual Entailment [article]

Tengfei Ma, Chiamin Wu, Cao Xiao, Jimeng Sun
2018 arXiv   pre-print
Experimental results on SciTail and SNLI datasets show that the learned asymmetric word embeddings could significantly improve the word-word interaction based textual entailment models.  ...  Different from paraphrase identification or sentence similarity evaluation, textual entailment is essentially determining a directional (asymmetric) relation between the premise and the hypothesis.  ...  We derive a new type of asymmetric word embedding from the entailment word pairs.  ... 
arXiv:1809.04047v2 fatcat:eit3zmy7wvd7bca7l56y5drtc4

Universal Natural Language Processing with Limited Annotations: Try Few-shot Textual Entailment as a Start [article]

Wenpeng Yin, Nazneen Fatema Rajani, Dragomir Radev, Richard Socher, Caiming Xiong
2020 arXiv   pre-print
However, current research of textual entailment has not spilled much ink on the following questions: (i) How well does a pretrained textual entailment system generalize across domains with only a handful  ...  NLP tasks such as question answering and coreference resolution when the end-task annotations are limited.  ...  Examples in GLUE-RTE mainly come from the news and Wikipedia domains. SciTail is from the science domain, designed from the end task of multiple-choice QA.  ... 
arXiv:2010.02584v1 fatcat:uyhaox2yljaj5bxwnfubmbbxkq

Improving Natural Language Inference with a Pretrained Parser [article]

Deric Pang, Lucy H. Lin, Noah A. Smith
2019 arXiv   pre-print
We introduce a novel approach to incorporate syntax into natural language inference (NLI) models. Our method uses contextual token-level vector representations from a pretrained dependency parser.  ...  SciTail: A textual entailment dataset from science question answering. In Proc. of AAAI.  ... 
arXiv:1909.08217v1 fatcat:2iicjhzwzjg47o7kd5ciqvsvyq

DocNLI: A Large-scale Dataset for Document-level Natural Language Inference [article]

Wenpeng Yin, Dragomir Radev, Caiming Xiong
2021 arXiv   pre-print
Natural language inference (NLI) is formulated as a unified framework for solving various NLP problems such as relation extraction, question answering, summarization, etc.  ...  This work presents DocNLI -- a newly-constructed large-scale dataset for document-level NLI. DocNLI is transformed from a broad range of NLP problems and covers multiple genres of text.  ...  SciTail (Khot et al., 2018) is also derived from the end QA task of answering multiple-choice school-level science questions.  ... 
arXiv:2106.09449v1 fatcat:qtehyxfcsffqjnu2smjlhha4dm

Answering Science Exam Questions Using Query Rewriting with Background Knowledge [article]

Ryan Musa, Xiaoyan Wang, Achille Fokoue, Nicholas Mattei, Maria Chang, Pavan Kapanipathi, Bassem Makni, Kartik Talamadupula, Michael Witbrock
2019 arXiv   pre-print
Our rewriter is able to incorporate background knowledge from ConceptNet and -- in tandem with a generic textual entailment system trained on SciTail that identifies support in the retrieved results --  ...  We present a system that rewrites a given question into queries that are used to retrieve supporting text from a large corpus of science-related text.  ...  We use match-LSTM (Wang and Jiang, 2016a,b) trained on SciTail as our textual entailment model.  ... 
arXiv:1809.05726v2 fatcat:27muyx5t7vcexh5gyrwinsibre

ASR-GLUE: A New Multi-task Benchmark for ASR-Robust Natural Language Understanding [article]

Lingyun Feng, Jianwei Yu, Deng Cai, Songxiang Liu, Haitao Zheng, Yan Wang
2022 arXiv   pre-print
To facilitate research on ASR-robust general language understanding, in this paper we propose the ASR-GLUE benchmark, a new collection of 6 different NLU tasks for evaluating the performance of models  ...  Extensive experimental results and analyses show that the proposed methods are effective to some extent, but still far from human performance, demonstrating that NLU under ASR error is still very challenging  ...  SciTail SciTail [19] is a recently released challenging textual entailment dataset collected from the science domain.  ... 
arXiv:2108.13048v2 fatcat:xdk462xhmrchvpnv6cvhsrw454

Stochastic Answer Networks for Natural Language Inference [article]

Xiaodong Liu, Kevin Duh, Jianfeng Gao
2019 arXiv   pre-print
Quora Question Pairs dataset.  ...  We propose a stochastic answer network (SAN) to explore multi-step inference strategies in Natural Language Inference.  ...  Our state-of-theart results on four benchmarks (SNLI, MultiNLI, SciTail, Quora Question Pairs) show the effectiveness of this multi-step inference architecture.  ... 
arXiv:1804.07888v2 fatcat:ygxlvfrqxfhrdikiuuun3pfnwq

Targeted Adversarial Training for Natural Language Understanding [article]

Lis Pereira, Xiaodong Liu, Hao Cheng, Hoifung Poon, Jianfeng Gao, Ichiro Kobayashi
2021 arXiv   pre-print
We present a simple yet effective Targeted Adversarial Training (TAT) algorithm to improve adversarial training for natural language understanding.  ...  SciTail: A textual entailment dataset from science question answering. In AAAI. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.  ...  The pascal recognising textual entailment challenge.  ... 
arXiv:2104.05847v1 fatcat:qhvjy2pk5vaqlls4uim7trsd5u

Natural Language Processing Applications: A New Taxonomy using Textual Entailment

Manar Elshazly, Mohammed Haggag, Soha Ahmed Ehssan
2021 International Journal of Advanced Computer Science and Applications  
Textual entailment is more precise than traditional Natural Language Processing techniques in extracting emotions from text, because the sentiment of any text can be clarified by textual entailment.  ...  For this purpose, combining textual entailment with deep learning can show a large improvement in performance accuracy and aid new applications such as depression detection.  ...  The new dataset, SCITAIL, is designed for the end task of answering school-level science questions.  ... 
doi:10.14569/ijacsa.2021.0120580 fatcat:yp7wtq6n7bamjcsnpl2hathrwy
Showing results 1 — 15 out of 111 results