How to choose an Explainability Method? Towards a Methodical Implementation of XAI in Practice
[article]
2021
arXiv
pre-print
Explainability is becoming an important requirement for organizations that make use of automated decision-making due to regulatory initiatives and a shift in public awareness. ...
Therefore, organizations that want or need to provide this explainability are confronted with the selection of an appropriate method for their use case. ...
In this paper, we discussed our ongoing work of designing a methodology to help XAI stakeholders choose an explainability method. ...
arXiv:2107.04427v1
fatcat:kfc3w35l7naljc7msqp2tn5ceq
A Comprehensive Taxonomy for Explainable Artificial Intelligence: A Systematic Survey of Surveys on Methods and Concepts
[article]
2022
arXiv
pre-print
With the number of XAI methods growing rapidly, a taxonomy of methods is needed by researchers as well as practitioners: to grasp the breadth of the topic, compare methods, and select the right XAI method ...
This paper unifies these efforts and provides a complete taxonomy of XAI methods with respect to notions present in the current state of research. ...
We would like to thank the consortium for the successful cooperation. ...
arXiv:2105.07190v3
fatcat:zy7vl6o4gzcbrpqkrxeyazyeuq
A Multidisciplinary Survey and Framework for Design and Evaluation of Explainable AI Systems
[article]
2020
arXiv
pre-print
Aiming to support diverse design goals and evaluation methods in XAI research, after a thorough review of XAI-related papers in the fields of machine learning, visualization, and human-computer interaction ...
Further, we provide summarized ready-to-use tables of evaluation methods and recommendations for different goals in XAI research. ...
The work in this paper is supported by the DARPA XAI program under N66001-17-2-4031 and by NSF award 1900767. ...
arXiv:1811.11839v5
fatcat:pl4mmtd2zzhipilebnc2khagu4
To trust or not to trust an explanation: using LEAF to evaluate local linear XAI methods
2021
PeerJ Computer Science
The purpose of LEAF is to provide a reference for end users to evaluate explanations in a standardised and unbiased way, and to guide researchers towards developing improved explainable techniques. ...
The existing literature lists many desirable properties for explanations to be useful, but there is a scarce consensus on how to quantitatively evaluate explanations in practice. ...
promise in practice, due to implementation details. ...
doi:10.7717/peerj-cs.479
pmid:33977131
pmcid:PMC8056245
fatcat:ltbymvindjc3doo2g77uugla3y
Review of Multi-Criteria Decision-Making Methods in Finance Using Explainable Artificial Intelligence
2022
Frontiers in Artificial Intelligence
Explainability is one of the main obstacles that AI faces today on the way to more practical implementation. ...
This article presents a review and classification of multi-criteria decision-making methods that help to achieve the goal of forthcoming research: to create artificial intelligence-based methods that are ...
There have not been many studies devoted to the application of XAI methods in a financial context. ...
doi:10.3389/frai.2022.827584
pmid:35360662
pmcid:PMC8961419
fatcat:bwgn5c7fr5axdbi77cgo2bous4
Training Characteristic Functions with Reinforcement Learning: XAI-methods play Connect Four
[article]
2022
arXiv
pre-print
One of the goals of Explainable AI (XAI) is to determine which input components were relevant for a classifier decision. This is commonly known as saliency attribution. ...
We propose a setup to directly train characteristic functions in the form of neural networks to play simple two-player games. ...
Acknowledgements This research was partially supported by the DFG Cluster of Excellence MATH+ (EXC-2046/1, project id 390685689) and the Research Campus Modal funded by the German Federal Ministry of Education ...
arXiv:2202.11797v2
fatcat:gv6z7ffscrdyhaqjz2cyvy5fli
Classification of Explainable Artificial Intelligence Methods through Their Output Formats
2021
Machine Learning and Knowledge Extraction
Consequently, many scholars have developed a plethora of methods to explain their functioning and the logic of their inferences. ...
Conflicts of Interest: The authors declare no conflict of interest. ...
doi:10.3390/make3030032
fatcat:2mf4wusxdnanthfdayt7iejgl4
Evaluating Explainable Methods for Predictive Process Analytics: A Functionally-Grounded Approach
[article]
2020
arXiv
pre-print
The research contributes to understanding the trustworthiness of explainable methods for predictive process analytics as a fundamental and key step towards human user-oriented evaluation. ...
However, it is unclear how fit for purpose these methods are in explaining process predictive models. ...
Assessing the latter, essential aspect of an explainable method is referred to as a functionally-grounded evaluation in XAI [1]. ...
arXiv:2012.04218v1
fatcat:q7vsu6kl7zhpngw3foov67dble
From Philosophy to Interfaces: an Explanatory Method and a Tool Inspired by Achinstein's Theory of Explanation
[article]
2021
arXiv
pre-print
We propose a new method for explanations in Artificial Intelligence (AI) and a tool to test its expressive power within a user interface. ...
Among the mainstream philosophical theories of explanation we identified one that in our view is more easily applicable as a practical model for user-centric tools: Achinstein's Theory of Explanation. ...
discipline to reduce the distance between individuals, society, and machines: eXplainable AI (XAI). Governments have also started to act towards the establishment of ground rules of behaviour from complex ...
arXiv:2109.04171v1
fatcat:cbjeokmpmfdfjdkdaxzbvnwl5i
Machine Learning Interpretability: A Survey on Methods and Metrics
2019
Electronics
However, the emergence of these methods shows there is no consensus on how to assess the explanation quality. Which are the most suitable metrics to assess the quality of an explanation? ...
Furthermore, a complete literature review is presented in order to identify future directions of work on this field. ...
offers guidance on each requirement's practical implementation. ...
doi:10.3390/electronics8080832
fatcat:3mcv7lccwrbj5hakti2iwvdtu4
Toward Interpretable Machine Learning: Transparent Deep Neural Networks and Beyond
[article]
2020
arXiv
pre-print
usage of explainable AI in a representative selection of application scenarios. ...
In this work we aim to (1) provide a timely overview of this active emerging field and explain its theoretical foundations, (2) put interpretability algorithms to a test both from a theory and comparative ...
In practice, it is important to reach an objective assessment of how good an explanation is. ...
arXiv:2003.07631v1
fatcat:pvjjzqns2bdtxlvganye4yipey
Current Challenges and Future Opportunities for XAI in Machine Learning-Based Clinical Decision Support Systems: A Systematic Review
2021
Applied Sciences
We performed a systematic literature review of work to-date in the application of XAI in CDSS. ...
However, we found an overall distinct lack of application of XAI in the context of CDSS and, in particular, a lack of user studies exploring the needs of clinicians. ...
Their findings are an important first step towards XAI in classification of diseased tissue. ...
doi:10.3390/app11115088
fatcat:rtawp4nsunh7zjr66atz7x63r4
Decision Theory Meets Explainable AI
[chapter]
2020
Lecture Notes in Computer Science
Explainability has been a core research topic in AI for decades and therefore it is surprising that the current concept of Explainable AI (XAI) seems to have been launched as late as 2016. ...
This is a problem with current XAI research because it tends to ignore existing knowledge and wisdom gathered over decades or even centuries by other relevant domains. ...
- It is difficult to choose which MCDM method to use and what that choice means in practice regarding results and explainability. - The choice of MCDM model and parameters remains subjective. ...
doi:10.1007/978-3-030-51924-7_4
fatcat:vp3wlgiiljaypcehauyi3cr5wy
Reviewing the Need for Explainable Artificial Intelligence (xAI)
[article]
2021
arXiv
pre-print
Yet, we have a limited understanding of how xAI research addresses the need for explainable AI. ...
We conduct a systematic review of xAI literature on the topic and identify four thematic debates central to how xAI addresses the black-box problem. ...
Considering the application of xAI in AI based systems, the four thematic debates indicate that organizations need to be cautious when choosing to use an xAI method to address a specific need. ...
arXiv:2012.01007v2
fatcat:s5r2d2ovgfdy5oeyedb3md45oe
explAIner: A Visual Analytics Framework for Interactive and Explainable Machine Learning
2019
IEEE Transactions on Visualization and Computer Graphics
We propose a framework for interactive and explainable machine learning that enables users to (1) understand machine learning models; (2) diagnose model limitations using different explainable AI methods ...
To operationalize the framework, we present explAIner, a visual analytics system for interactive and explainable machine learning that instantiates all phases of the suggested pipeline within the commonly ...
We also classify a selection of relevant XAI methods according to their properties, as shown in Table 1. These XAI methods are available as different explainers in our system implementation. ...
doi:10.1109/tvcg.2019.2934629
pmid:31442998
fatcat:ganc4ulfcvh7nabhts4mateuwq
Showing results 1 — 15 out of 918 results