105,908 Hits in 5.1 sec

Towards Automated Evaluation of Explanations in Graph Neural Networks [article]

Vanya BK, Balaji Ganesan, Aniket Saxena, Devbrat Sharma, Arvind Agarwal
2021 arXiv   pre-print
In particular, we do not have well-developed methods for automatically evaluating explanations in ways that are closer to how users consume those explanations.  ...  Explaining Graph Neural Network predictions to end users of AI applications in easily understandable terms remains an unsolved problem.  ...  Acknowledgements The first author's participation in this work has been made possible by IBM's Global Research Mentorship program.  ... 
arXiv:2106.11864v1 fatcat:afruf66625eanehwistkroaanq

ExplaGraphs: An Explanation Graph Generation Task for Structured Commonsense Reasoning [article]

Swarnadeep Saha, Prateek Yadav, Lisa Bauer, Mohit Bansal
2021 arXiv   pre-print
In this work, we present ExplaGraphs, a new generative and structured commonsense-reasoning task (and an associated dataset) of explanation graph generation for stance prediction.  ...  Recent commonsense-reasoning tasks are typically discriminative in nature, where a model answers a multiple-choice question for a certain context.  ...  E.4 Quantitative Analysis of Generated Explanation Graphs from RE-T5: In order to gain a better understanding of the explanation graphs generated by our Reasoning-T5 model, we show sample explanation graphs  ... 
arXiv:2104.07644v3 fatcat:7sgvwwriejepriovs5dkmrtq3i

Improving Commonsense Question Answering by Graph-based Iterative Retrieval over Multiple Knowledge Sources [article]

Qianglong Chen, Feng Ji, Haiqing Chen, Yin Zhang
2020 arXiv   pre-print
In order to facilitate natural language understanding, the key is to engage commonsense or background knowledge.  ...  knowledge from multiple knowledge sources.  ...  Although the Cambridge Dictionary is not a graph by nature, we can regard each dictionary entry as a sub-graph composed of explanations, examples, and synonyms.  ... 
arXiv:2011.02705v1 fatcat:glllz6enhfhhdf2tyq4olrodzi

Inherently Explainable Reinforcement Learning in Natural Language [article]

Xiangyu Peng, Mark O. Riedl, Prithviraj Ammanabrolu
2022 arXiv   pre-print
...  Our agent is designed to treat explainability as a first-class citizen, using an extracted symbolic knowledge graph-based state representation coupled with a Hierarchical Graph Attention mechanism that  ...  In order to make the explanation more readable for a human reader, we further transform knowledge graph triplets into natural language by template filling.  ... 
arXiv:2112.08907v2 fatcat:4scvsdbhebg55d4hxwvyxwdice
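The snippet above (Peng et al.) mentions turning knowledge-graph triplets into natural language by template filling. A minimal sketch of that idea, with relations, templates, and triplets invented for illustration rather than taken from the paper:

```python
# Toy template filling: each KG relation maps to a surface template with
# {head} and {tail} slots; unseen relations fall back to a generic pattern.
TEMPLATES = {
    "located_in": "{head} is located in {tail}.",
    "has_item": "{head} contains {tail}.",
}

def verbalize(triplet):
    """Render one (head, relation, tail) triplet with its relation's template."""
    head, relation, tail = triplet
    template = TEMPLATES.get(relation, "{head} is related to {tail}.")
    return template.format(head=head, tail=tail)

def explain(triplets):
    """Join verbalized triplets into a short natural-language explanation."""
    return " ".join(verbalize(t) for t in triplets)

print(explain([("key", "located_in", "drawer"), ("drawer", "has_item", "key")]))
# → key is located in drawer. drawer contains key.
```

The per-relation templates keep the output readable; a generic fallback guarantees every triplet can still be verbalized.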

Semantics of the Black-Box: Can knowledge graphs help make deep learning systems more interpretable and explainable? [article]

Manas Gaur, Keyur Faldu, Amit Sheth
2020 arXiv   pre-print
We then discuss how this makes a fundamental difference in the interpretability and explainability of current approaches, and illustrate it with examples from natural language processing for healthcare  ...  This article demonstrates how knowledge, provided as a knowledge graph, is incorporated into DL methods using knowledge-infused learning, which is one of the strategies.  ...  More often than not, explanations would be in natural language explaining the decision, while interpretations can be statistical or conceptual (using either generic or domain-specific KG [14] , [9] )  ... 
arXiv:2010.08660v4 fatcat:hcoahll2ivhdpcix7t6ezh425y

A Survey on Explainability in Machine Reading Comprehension [article]

Mokanarangan Thayaparan, Marco Valentino, André Freitas
2020 arXiv   pre-print
In addition, we identify persisting open research questions and highlight critical directions for future work.  ...  This paper presents a systematic review of benchmarks and approaches for explainability in Machine Reading Comprehension (MRC).  ...  Explanation Type: (1) knowledge-based explanation; (2) operational-based explanation. Generated Output: denotes whether the explanation is generated or composed from facts retrieved from the background knowledge  ... 
arXiv:2010.00389v1 fatcat:jzxjysnma5ee5auvplfxxfar2u

Semantics of the Black-Box: Can Knowledge Graphs Help Make Deep Learning Systems More Interpretable and Explainable?

Manas Gaur, Keyur Faldu, Amit Sheth
2021 IEEE Internet Computing  
More often than not, explanations would be in natural language explaining the decision, while interpretations can be statistical or conceptual (using either generic or domain-specific KG [14] , [9] )  ...  We then discuss how this makes a fundamental difference in the interpretability and explainability of current approaches, and illustrate it with examples from natural language processing for healthcare  ... 
doi:10.1109/mic.2020.3031769 fatcat:p2ivnkyy5zblhm4ztua7bfslbi

Explanation Graph Generation via Pre-trained Language Models: An Empirical Study with Contrastive Learning [article]

Swarnadeep Saha, Prateek Yadav, Mohit Bansal
2022 arXiv   pre-print
Pre-trained sequence-to-sequence language models have led to widespread success in many natural language generation tasks.  ...  In this work, we study pre-trained language models that generate explanation graphs in an end-to-end manner and analyze their ability to learn the structural constraints and semantics of such graphs.  ...  The views in this article are those of the authors and not the funding agency.  ... 
arXiv:2204.04813v1 fatcat:j4mllh74n5bovpoyn2qb4luo4e

WorldTree: A Corpus of Explanation Graphs for Elementary Science Questions supporting Multi-Hop Inference [article]

Peter A. Jansen, Elizabeth Wainwright, Steven Marmorstein, Clayton T. Morrison
2018 arXiv   pre-print
as "explanation graphs" -- sets of lexically overlapping sentences that describe how to arrive at the correct answer to a question through a combination of domain and world knowledge.  ...  In this paper we present a corpus of explanations for standardized science exams, a recent challenge task for question answering.  ...  In generating a corpus of natural-language explanations for 432 elementary science questions, Jansen et al. (2016) found that the average question requires aggregating 4 separate pieces of knowledge  ... 
arXiv:1802.03052v1 fatcat:sn74to3szvg2nedluexrbibr4a
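The WorldTree snippet describes explanation graphs as sets of lexically overlapping sentences. A toy sketch of that linking criterion, where two sentences are connected when they share at least one content word (the sentences and stopword list below are illustrative, not from the corpus):

```python
# Link sentences into a rough "explanation graph" by lexical overlap.
STOPWORDS = {"a", "an", "the", "is", "are", "of", "to", "when"}

def content_words(sentence):
    """Lowercased tokens minus punctuation and stopwords."""
    return {w.strip(".,").lower() for w in sentence.split()} - STOPWORDS

def overlap_edges(sentences):
    """Return index pairs of sentences sharing at least one content word."""
    words = [content_words(s) for s in sentences]
    return [(i, j) for i in range(len(sentences))
            for j in range(i + 1, len(sentences))
            if words[i] & words[j]]

facts = [
    "A shadow is made when an object blocks light.",
    "The sun is a source of light.",
    "A tree is an object.",
]
print(overlap_edges(facts))  # → [(0, 1), (0, 2)]
```

Sentence 0 links to sentence 1 via "light" and to sentence 2 via "object"; real multi-hop inference would then traverse such edges to assemble a full explanation.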

Querying Enterprise Knowledge Graph With Natural Language

Junyi Chai, Yonggang Deng, Maochen Guan, Yujie He, Bing Li, Rui Yan
2019 International Semantic Web Conference  
correct graph queries and restating the query results in natural language back to users.  ...  (SQG) then takes the outputs from the above components to generate queries, then queries knowledge bases and gets results.  ... 
dblp:conf/semweb/ChaiDGHLY19 fatcat:lfrurui2pja4la3iluyymvjb64

Towards Combinational Relation Linking over Knowledge Graphs [article]

Weiguo Zheng, Mei Zhang
2019 arXiv   pre-print
Given a natural language phrase, relation linking aims to find a relation (predicate or property) from the underlying knowledge graph to match the phrase.  ...  In this paper, we focus on the task of combinational relation linking over knowledge graphs.  ...  S ) provides searching mechanisms for linking natural language relations to knowledge graphs.  ... 
arXiv:1910.09879v2 fatcat:ykm6vigc3jbmvnvlxii2uwof7e
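The Zheng and Zhang snippet defines relation linking as matching a natural-language phrase to a predicate in the knowledge graph. A deliberately simplistic sketch using plain string similarity (the relation inventory is invented; the paper's combinational approach is far richer):

```python
# Hypothetical relation linking: pick the KG predicate whose name is most
# similar to the input phrase under difflib's sequence-matching ratio.
from difflib import SequenceMatcher

KG_RELATIONS = ["birthPlace", "deathPlace", "spouse", "almaMater"]

def link_relation(phrase, relations=KG_RELATIONS):
    """Return the predicate with the highest string-similarity score."""
    def score(rel):
        return SequenceMatcher(None, phrase.lower(), rel.lower()).ratio()
    return max(relations, key=score)

print(link_relation("alma mater"))  # → almaMater
```

Surface similarity only works when the phrase and predicate name share wording; handling paraphrases ("born in" vs. birthPlace) is exactly what makes relation linking hard.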

Generating Tailored Worked-Out Problem Solutions to Help Students Learn from Examples [chapter]

Giuseppe Carenini, Cristina Conati
2005 Text, Speech and Language Technology  
When presenting a new example, the framework uses natural language generation techniques and a probabilistic student model to tailor the example to the student's domain knowledge.  ...  Filling in solution gaps is part of the meta-cognitive skill known as self-explanation (generating explanations to oneself to clarify an example solution), which is crucial to learning effectively from examples  ...  This problem is novel in ITS research, as it requires sophisticated natural language generation (NLG) techniques.  ... 
doi:10.1007/1-4020-3051-7_8 fatcat:puuqeozieffh3or3tvsiiaythm

Knowledge-based relational search in cultural heritage linked data

Eero Hyvönen, Heikki Rantala
2021 Digital Scholarship in the Humanities  
In this way, (1) semantically uninteresting connections can be ruled out effectively and (2) natural language explanations about the connections can be created for the end-user.  ...  This article presents a new knowledge-based approach for finding interesting semantic relations between resources in a knowledge graph (KG).  ...  natural language explanations for the connections.  ... 
doi:10.1093/llc/fqab042 fatcat:vi2ml7vffrgnjkhbpueu6mpn4q

Explaining Bayesian Networks in Natural Language: State of the Art and Challenges

Conor Hennessy, Alberto Bugarín, Ehud Reiter
2020 Zenodo  
We outline several challenges that remain to be addressed in the generation and validation of natural language explanations of Bayesian Networks.  ...  In this paper we aim to highlight the importance of a natural language approach to explanation and to discuss some of the previous and state of the art attempts of the textual explanation of Bayesian Networks  ...  Acknowledgments This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie Grant Agreement No. 860621.  ... 
doi:10.5281/zenodo.5882297 fatcat:jzlyddagorbftnyfolbyia2s4a


2004 International journal on artificial intelligence tools  
HYLITE+ is a natural language generation system that generates adaptive Web pages based on a learner model (LM). The two systems are complementary and have been implemented separately.  ...  Specifically, we focus on two critical issues in intelligent tutoring -- student diagnosis and generation of adaptive explanations.  ...  Last but not least, we thank the human evaluators who participated in the evaluative studies of the two systems.  ... 
doi:10.1142/s0218213004001569 fatcat:yqsbftan3ndpbn4z7khnv22rnu