1,191,782 Hits in 4.4 sec

Local Interpretations for Explainable Natural Language Processing: A Survey [article]

Siwen Luo and Hamish Ivison and Caren Han and Josiah Poon
2021 arXiv   pre-print
This work investigates various methods to improve the interpretability of deep neural networks for natural language processing (NLP) tasks, including machine translation and sentiment analysis.  ...  ; 2) explaining through natural language explanation; 3) probing the hidden states of models and word representations.  ...  In this survey paper, we focus on interpretable methods proposed for natural language processing tasks.  ... 
arXiv:2103.11072v1 fatcat:7453vleiqfd73fde7gp3222mtm

Teach Me to Explain: A Review of Datasets for Explainable Natural Language Processing [article]

Sarah Wiegreffe, Ana Marasović
2021 arXiv   pre-print
Explainable NLP (ExNLP) has increasingly focused on collecting human-annotated textual explanations.  ...  These explanations are used downstream in three ways: as data augmentation to improve performance on a predictive task, as supervision to train models to produce explanations for their predictions, and  ...  Acknowledgements We are grateful to Yejin Choi, Peter Clark, Gabriel Ilharco, Alon Jacovi, Daniel Khashabi, Mark Riedl, Alexis Ross, and Noah Smith for valuable feedback.  ... 
arXiv:2102.12060v4 fatcat:ufmg2rdexbbhlnqborhhsivy6u

A Survey of the State of Explainable AI for Natural Language Processing [article]

Marina Danilevsky, Kun Qian, Ranit Aharonov, Yannis Katsis, Ban Kawas, Prithviraj Sen
2020 arXiv   pre-print
This survey presents an overview of the current state of Explainable AI (XAI), considered within the domain of Natural Language Processing (NLP).  ...  We detail the operations and explainability techniques currently available for generating explanations for NLP model predictions, to serve as a resource for model developers in the community.  ...  Introduction Traditionally, Natural Language Processing (NLP) systems have been mostly based on techniques that are inherently explainable.  ... 
arXiv:2010.00711v1 fatcat:7si7hkcknzchbb7gdujew5sbiq

You May Like This Hotel Because ...: Identifying Evidence for Explainable Recommendations

Shin Kanouchi
2021 Journal of Natural Language Processing  
doi:10.5715/jnlp.28.264 fatcat:plsblkp6z5b6ziycpqbx7kozzq

TellMeWhy: Learning to Explain Corrective Feedback for Second Language Learners

Yi-Huei Lai, Jason Chang
2019 Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations  
as Second Language (TESOL).  ...  [Figure 4: Outline of Smadja's process]  ...  collocates words using the EF-Cambridge Open Language Database, EFCAMDAT (Geertzen et al. 2013; Huang et al. 2018).  ... 
doi:10.18653/v1/d19-3040 dblp:conf/emnlp/LaiC19 fatcat:zpfuqkb57zhedmrqy64jnbryja

AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models

Eric Wallace, Jens Tuyls, Junlin Wang, Sanjay Subramanian, Matt Gardner, Sameer Singh
2019 Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations  
language modeling using BERT and reading comprehension using BiDAF).  ...  We introduce AllenNLP Interpret, a flexible framework for interpreting NLP models.  ...  Acknowledgements We are grateful to Shi Feng, the members of UCI NLP, and the anonymous reviewers for their valuable feedback.  ... 
doi:10.18653/v1/d19-3002 dblp:conf/emnlp/WallaceTWSGS19 fatcat:jprdepaw6ra25fq4g4wxwwpxga

Explaining the Stars: Weighted Multiple-Instance Learning for Aspect-Based Sentiment Analysis

Nikolaos Pappas, Andrei Popescu-Belis
2014 Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)  
Next, the model is used to predict aspect ratings in previously unseen texts, demonstrating interpretability and explanatory power for its predictions.  ...  For learning from texts with known aspect ratings, the model performs multiple-instance regression (MIR) and assigns importance weights to each of the sentences or paragraphs of a text, uncovering their  ...  [example fragment: aspect "beautiful" — "The beauty of the nature."]  ... 
doi:10.3115/v1/d14-1052 dblp:conf/emnlp/PappasP14 fatcat:rza6hiqkobdczeb64yvlszsuci

A causal framework for explaining the predictions of black-box sequence-to-sequence models

David Alvarez-Melis, Tommi Jaakkola
2017 Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing  
Acknowledgments We thank the anonymous reviewers for their helpful suggestions regarding presentation and additional experiments, and Dr. Chantal Melis for valuable feedback.  ...  This is particularly true for structured prediction methods at the core of many natural language processing tasks such as machine translation (MT).  ...  Bias detection in parallel corpora Natural language processing methods that derive semantics from large corpora have been shown to incorporate biases present in the data, such as archaic stereotypes of  ... 
doi:10.18653/v1/d17-1042 dblp:conf/emnlp/Alvarez-MelisJ17 fatcat:j3yi5abponhpzn6shjnmdknf5a

Detecting and Explaining Causes From Text For a Time Series Event

Dongyeop Kang, Varun Gangal, Ang Lu, Zheng Chen, Eduard Hovy
2017 Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing  
Explaining underlying causes or effects about events is a challenging but valuable task.  ...  [Table 8: Example causal chains for explaining the rise (↑) and fall (↓) of companies' stock price.]  ...  Conclusion: This paper defines the novel task of detecting and explaining causes from text for a time series. First, we detect causal features from online text.  ... 
doi:10.18653/v1/d17-1292 dblp:conf/emnlp/KangGLCH17 fatcat:ucdbymjj3bezrmlaz7ydhqdore

HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering

Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, Christopher D. Manning
2018 Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing  
We show that HOTPOTQA is challenging for the latest QA systems, and the supporting facts enable models to improve performance and make explainable predictions. * These authors contributed equally.  ...  Existing question answering (QA) datasets fail to train QA systems to perform complex reasoning and provide explanations for answers.  ...  This makes it difficult for models to learn about the underlying reasoning process, as well as to make explainable predictions.  ... 
doi:10.18653/v1/d18-1259 dblp:conf/emnlp/Yang0ZBCSM18 fatcat:vvg2p6aasbh2tckluiygpxckey

Explaining Character-Aware Neural Networks for Word-Level Prediction: Do They Discover Linguistic Rules?

Fréderic Godin, Kris Demuynck, Joni Dambre, Wesley De Neve, Thomas Demeester
2018 Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing  
Character-level features are currently used in different neural network-based natural language processing algorithms. However, little is known about the character-level patterns those models learn.  ...  We evaluate and compare these models for the task of morphological tagging on three morphologically different languages and show that these models implicitly discover understandable linguistic rules.  ...  FG would like to thank Kim Bettens for helping out with the statistical analysis.  ... 
doi:10.18653/v1/d18-1365 dblp:conf/emnlp/GodinDDND18 fatcat:vpwfxbhq3vh3fpfbkrg4bci42e

Measuring Beginner Friendliness of Japanese Web Pages explaining Academic Concepts by Integrating Neural Image Feature and Text Features

Hayato Shiokawa, Kota Kawaguchi, Bingcai Han, Takehito Utsuro, Yasuhide Kawada, Masaharu Yoshioka, Noriko Kando
2018 Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications  
To improve the efficiency of using a search engine for academic study, it is necessary to develop a technique for measuring the beginner friendliness of a Web page explaining academic concepts and  ...  of foreign-language words into Japanese and the writing of loan words, for emphasis, for onomatopoeia, for technical and scientific terms, and for names of plants, animals, minerals, and often Japanese  ...  Furthermore, parameters of a CNN pre-trained on a large-scale general-purpose image dataset (e.g., natural images) have been shown to be quite useful for extracting universal features that can be  ... 
doi:10.18653/v1/w18-3721 dblp:conf/acl-tea/ShiokawaKHUKYK18 fatcat:nra2mszxafcjpm36cxzmszvnky

An Operation Sequence Model for Explainable Neural Machine Translation

Felix Stahlberg, Danielle Saunders, Bill Byrne
2018 Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP  
We propose to achieve explainable neural machine translation (NMT) by changing the output representation to explain itself.  ...  In contrast to many modern neural models, our system emits explicit word alignment information which is often crucial to practical machine translation as it improves explainability.  ...  of natural language processing (Karpathy et al., 2015; Alvarez-Melis and Jaakkola, 2017; Ding et al., 2017; Feng et al., 2018) .  ... 
doi:10.18653/v1/w18-5420 dblp:conf/emnlp/StahlbergSB18 fatcat:dwkt3parvfdljgdp7ewwv7aovm

Explaining non-linear Classifier Decisions within Kernel-based Deep Architectures

Danilo Croce, Daniele Rossini, Roberto Basili
2018 Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP  
However, epistemologically transparent decisions are not provided, owing to the limited interpretability of the underlying acquired neural models.  ...  Introduction: Nonlinear methods such as deep neural networks achieve state-of-the-art performance on several challenging problems, such as image classification and natural language processing (NLP).  ...  In this work, we propose a model that can provide explanations easily interpretable even by non-expert users, as they are expressed in natural language and are hence a more natural solution  ... 
doi:10.18653/v1/w18-5403 dblp:conf/emnlp/CroceRB18 fatcat:g6bo4befeveqveaqccvdbnylny

Learning to Explain Entity Relationships in Knowledge Graphs

Nikos Voskarides, Edgar Meij, Manos Tsagkias, Maarten de Rijke, Wouter Weerkamp
2015 Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)  
doi:10.3115/v1/p15-1055 dblp:conf/acl/VoskaridesMTRW15 fatcat:o6ezw3il4fbq5g2adbvl3ps4ly
Showing results 1–15 of 1,191,782