160,376 Hits in 4.9 sec

Training Classifiers with Natural Language Explanations [article]

Braden Hancock, Paroma Varma, Stephanie Wang, Martin Bringmann, Percy Liang, Christopher Ré
2018 arXiv   pre-print
In this work, we propose BabbleLabble, a framework for training classifiers in which an annotator provides a natural language explanation for each labeling decision. … On three relation extraction tasks, we find that users are able to train classifiers with comparable F1 scores 5-100× faster by providing explanations instead of just labels. …
arXiv:1805.03818v4 fatcat:cde6tf6ifbcmdfmjbajytkrgau

Training Classifiers with Natural Language Explanations

Braden Hancock, Martin Bringmann, Paroma Varma, Percy Liang, Stephanie Wang, Christopher Ré
2018 Annual Meeting of the Association for Computational Linguistics (ACL), Conference Proceedings
In this work, we propose BabbleLabble, a framework for training classifiers in which an annotator provides a natural language explanation for each labeling decision. … On three relation extraction tasks, we find that users are able to train classifiers with comparable F1 scores 5-100× faster by providing explanations instead of just labels. …
pmid:31130772 pmcid:PMC6534135 fatcat:6ele4k77f5c4biny7rkynqtudm

Training Classifiers with Natural Language Explanations

Braden Hancock, Paroma Varma, Stephanie Wang, Martin Bringmann, Percy Liang, Christopher Ré
2018 Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)  
In this work, we propose BabbleLabble, a framework for training classifiers in which an annotator provides a natural language explanation for each labeling decision. … On three relation extraction tasks, we find that users are able to train classifiers with comparable F1 scores 5-100× faster by providing explanations instead of just labels. …
doi:10.18653/v1/p18-1175 dblp:conf/acl/LiangRHVWB18 fatcat:tewy3eyv2vb2zc64rgsbmrt2ri

ALICE: Active Learning with Contrastive Natural Language Explanations [article]

Weixin Liang, James Zou, Zhou Yu
2020 arXiv   pre-print
We propose Active Learning with Contrastive Explanations (ALICE), an expert-in-the-loop training framework that utilizes contrastive natural language explanations to improve data efficiency in learning … We found that by incorporating contrastive explanations, our models outperform baseline models that are trained with 40-100% more training data. …
arXiv:2009.10259v1 fatcat:u4d44vcv75enjgb46b3nipbphe

Investigating the Effect of Natural Language Explanations on Out-of-Distribution Generalization in Few-shot NLI [article]

Yangqiaoyu Zhou, Chenhao Tan
2021 arXiv   pre-print
We leverage the templates in the HANS dataset and construct templated natural language explanations for each template. … In this work, we formulate a few-shot learning setup and examine the effects of natural language explanations on OOD generalization. … We thank Tom McCoy, an author of the HANS paper, for a detailed explanation of their data when we reached out.
arXiv:2110.06223v1 fatcat:4tdw3qj2dnhq5lh4mueromkepm

ExpBERT: Representation Engineering with Natural Language Explanations [article]

Shikhar Murty, Pang Wei Koh, Percy Liang
2020 arXiv   pre-print
In this paper, we allow model developers to specify these types of inductive biases as natural language explanations. … We use BERT fine-tuned on MultiNLI to "interpret" these explanations with respect to the input sentence, producing explanation-guided representations of the input. … We also thank Yuhao Zhang for assistance with TACRED experiments. PWK was supported by the Facebook Fellowship Program.
arXiv:2005.01932v1 fatcat:7cijuy4tpraqpb3v3adwmiyc6a

Knowledge-Guided Sentiment Analysis via Learning From Natural Language Explanations

Zunwang Ke, Jiabao Sheng, Zhe Li, Wushour Silamu, Qinglang Guo
2021 IEEE Access  
Attempts have been made to train classifiers with NL explanations. … In leveraging NL to train the classifiers, supervision in the form of NL has been explored by many works. …
doi:10.1109/access.2020.3048088 fatcat:z6cmaftlu5e4be2sndsvjbjnga

You Can Do Better! If You Elaborate the Reason When Making Prediction [article]

Dongfang Li, Jingcong Tao, Qingcai Chen, Baotian Hu
2021 arXiv   pre-print
The experimental results show that the proposed approach can generate reasonable explanations for its predictions even with a small-scale training corpus. … Neural predictive models have achieved remarkable performance improvements in various natural language processing tasks. … [36] leverage the text-to-text framework [40] to train language models to output natural language explanations along with their predictions.
arXiv:2103.14919v2 fatcat:aabu2l3t2bewhgvi4njwesa2yy

Zero-shot Learning of Classifiers from Natural Language Quantification

Shashank Srivastava, Igor Labutov, Tom Mitchell
2018 Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)  
Experiments on three domains show that the learned classifiers outperform previous approaches for learning with limited data, and are comparable with fully supervised classifiers trained from a small number … Humans can efficiently learn new concepts using language. … Natural language explanations on how to classify concept examples are parsed into formal constraints relating features to concept labels.
doi:10.18653/v1/p18-1029 dblp:conf/acl/MitchellSL18 fatcat:fuhmnmaqzbghnjdat3w4ofz3jm

e-SNLI: Natural Language Inference with Natural Language Explanations [article]

Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, Phil Blunsom
2018 arXiv   pre-print
In this work, we extend the Stanford Natural Language Inference dataset with an additional layer of human-annotated natural language explanations of the entailment relations. … Our dataset thus opens up a range of research directions for using natural language explanations, both for improving models and for asserting their trust. … We argue for free-form natural language explanations, as opposed to formal language, for a series of reasons.
arXiv:1812.01193v2 fatcat:leshzirbs5d7rjmuhliimonzvq

A text-mining approach to explain unwanted behaviours

Wei Chen, David Aspinall, Andrew D. Gordon, Charles Sutton, Igor Muttik
2016 Proceedings of the 9th European Workshop on System Security - EuroSec '16  
Our approach combines machine learning and text mining techniques to produce explanations in natural language. … As far as we know, this is the first attempt to generate explanations in natural language by mining the reports written by human malware analysts, resulting in a scalable and entirely data-driven method … We have presented a new text-mining approach to generate natural language explanations of unwanted behaviours of Android apps.
doi:10.1145/2905760.2905763 dblp:conf/eurosec/ChenAGSM16 fatcat:lmksifo2yvfppn3dw3ub4mncku

Explain Yourself! Leveraging Language Models for Commonsense Reasoning

Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, Richard Socher
2019 Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics  
We use CoS-E to train language models to automatically generate explanations that can be used during training and inference in a novel Commonsense Auto-Generated Explanation (CAGE) framework. … We collect human explanations for commonsense reasoning in the form of natural language sequences and highlighted annotations in a new dataset called Common Sense Explanations (CoS-E). … Human-generated natural language explanations for classification data have been used in the past to train a semantic parser that in turn generates more noisy labeled data, which can be used to train a classifier …
doi:10.18653/v1/p19-1487 dblp:conf/acl/RajaniMXS19 fatcat:7ezmngl5vfektlfqwm3ybuf7eu

InterpNET: Neural Introspection for Interpretable Deep Learning [article]

Shane Barratt
2017 arXiv   pre-print
This paper proposes a neural network design paradigm, termed InterpNET, which can be combined with any existing classification architecture to generate natural language explanations of the classifications … Experiments on a CUB bird classification and explanation dataset show qualitatively and quantitatively that the model is able to generate high-quality explanations. … This paper introduces a general neural network module which can be combined with any existing classification architecture to generate natural language explanations of the network's classifications …
arXiv:1710.09511v2 fatcat:ox6egtammbekrove7z74yzeqtu

Generating Counterfactual Explanations with Natural Language [article]

Lisa Anne Hendricks, Ronghang Hu, Trevor Darrell, Zeynep Akata
2018 arXiv   pre-print
Natural language explanations of deep neural network decisions provide an intuitive way for an AI agent to articulate a reasoning process. … We call such textual explanations counterfactual explanations, and propose an intuitive method to generate counterfactual explanations by inspecting which evidence in an input is missing, but might contribute … Natural language is an intuitive and efficient way to communicate with AI agents about complex data, such as images.
arXiv:1806.09809v1 fatcat:alq2k4smefdixm3z476ytrsotm

ESPRIT: Explaining Solutions to Physical Reasoning Tasks [article]

Nazneen Fatema Rajani, Rui Zhang, Yi Chern Tan, Stephan Zheng, Jeremy Weiss, Aadit Vyas, Abhijit Gupta, Caiming Xiong, Richard Socher, Dragomir Radev
2020 arXiv   pre-print
We propose ESPRIT, a framework for commonsense reasoning about qualitative physics in natural language that generates interpretable descriptions of physical events. … We use a two-step approach of first identifying the pivotal physical events in an environment and then generating natural language descriptions of those events using a data-to-text approach. … Natural language explanations for sequences of pivotal events.
arXiv:2005.00730v2 fatcat:owaben5c7fe67exmfoaw4bqu5a
Showing results 1–15 of 160,376.