1,215 Hits in 2.7 sec

Relation-Based Counterfactual Explanations for Bayesian Network Classifiers

Emanuele Albini, Antonio Rago, Pietro Baroni, Francesca Toni
2020 Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence  
We propose a general method for generating counterfactual explanations (CFXs) for a range of Bayesian Network Classifiers (BCs), e.g. single- or multi-label, binary or multidimensional.  ...  We focus on explanations built from relations of (critical and potential) influence between variables, indicating the reasons for classifications, rather than any probabilistic information.  ...  Explaining Bayesian Classifiers: In this section we define a novel notion of counterfactual explanations for (an abstract representation of) BCs.  ... 
doi:10.24963/ijcai.2020/63 dblp:conf/ijcai/AlbiniRBT20 fatcat:kdpixr25nng3xk4w5r5urhbiua

An Interpretable Probabilistic Approach for Demystifying Black-box Predictive Models [article]

Catarina Moreira and Yu-Liang Chou and Mythreyi Velmurugan and Chun Ouyang and Renuka Sindhgatta and Peter Bruza
2020 arXiv   pre-print
The framework supports extracting a Bayesian network as an approximation of the black-box model for a specific prediction.  ...  In this paper, we propose a novel approach underpinned by an extended framework of Bayesian networks for generating post hoc interpretations of a black-box predictive model.  ...  Probabilistic graphical model: The literature of interpretable methods for explainable AI based on probabilistic graphical models (PGMs) is mostly dominated by models based on counterfactual reasoning in  ... 
arXiv:2007.10668v1 fatcat:j2i4qvqqnvhlfavroi5s5kvm3a

Influence-Driven Explanations for Bayesian Network Classifiers [article]

Antonio Rago, Emanuele Albini, Pietro Baroni, Francesca Toni
2021 arXiv   pre-print
We focus on explanations for discrete Bayesian network classifiers (BCs), targeting greater transparency of their inner workings by including intermediate variables in explanations, rather than just the  ...  e.g., IDXs may be dialectical or counterfactual.  ... 
arXiv:2012.05773v3 fatcat:3gt67wdyh5frlfbbh4fwpfiyyi

Uncertainty Estimation and Out-of-Distribution Detection for Counterfactual Explanations: Pitfalls and Solutions [article]

Eoin Delaney, Derek Greene, Mark T. Keane
2021 arXiv   pre-print
Whilst an abundance of techniques have recently been proposed to generate counterfactual explanations for the predictions of opaque black-box systems, markedly less attention has been paid to exploring the uncertainty of these generated explanations.  ... 
arXiv:2107.09734v1 fatcat:ungfgqe5ebe4nj6ngmwyd5in74

Drug discovery with explainable artificial intelligence [article]

José Jiménez-Luna, Francesca Grisoni, Gisbert Schneider
2020 arXiv   pre-print
There is a demand for 'explainable' deep learning methods to address the need for a new narrative of the machine language of the molecular sciences.  ...  Deep learning bears promise for drug discovery, including advanced image analysis, prediction of molecular structure and function, and automated generation of innovative chemical entities with bespoke  ...  This methodology is, therefore, related to both anchors and counterfactual search approaches.  ... 
arXiv:2007.00523v2 fatcat:vwbm5ctaengetbsrkqjf54hoei

Drug discovery with explainable artificial intelligence

José Jiménez-Luna, Francesca Grisoni, Gisbert Schneider
2020 Nature Machine Intelligence  
Given the current pace of AI in drug discovery and related fields, there will be an increased demand for methods that help us understand and interpret the underlying models.  ...  Providing informative explanations alongside the mathematical models aims to (1) render the underlying decision-making process  ...  This methodology is related to both anchors and counterfactual search approaches.  ... 
doi:10.1038/s42256-020-00236-4 fatcat:nlkwpc2jvvhcblmiulbdzzxaiq

Causal Interpretability for Machine Learning – Problems, Methods and Evaluation [article]

Raha Moraffah, Mansooreh Karami, Ruocheng Guo, Adrienne Raglin, Huan Liu
2020 arXiv   pre-print
Moreover, to generate more human-friendly explanations, recent work on interpretability tries to answer questions related to causality, such as "Why does this model make such decisions?"  ...  In addition, this survey provides in-depth insights into the existing evaluation metrics for measuring interpretability, which can help practitioners understand for what scenarios each evaluation metric  ... 
arXiv:2003.03934v3 fatcat:awzv47nmv5aqtl4j5asmp5v7zq

A Survey on the Explainability of Supervised Machine Learning [article]

Nadia Burkart, Marco F. Huber
2020 arXiv   pre-print
...  artificial neural networks have a high accuracy, but humans often perceive the models as black boxes. Insights about the decision making are mostly opaque to humans.  ...  We conduct a state-of-the-art survey that reviews past and recent explainable SML approaches and classifies them according to the introduced definitions.  ...  [Table 3: Overview of interpretable-by-design approaches]  ... 
arXiv:2011.07876v1 fatcat:ccquewit2jam3livk77l5ojnqq

A Survey on the Explainability of Supervised Machine Learning

Nadia Burkart, Marco F. Huber
2021 The Journal of Artificial Intelligence Research  
...  artificial neural networks have a high accuracy, but humans often perceive the models as black boxes. Insights about the decision making are mostly opaque to humans.  ...  We conduct a state-of-the-art survey that reviews past and recent explainable SML approaches and classifies them according to the introduced definitions.  ...  They are designed to model causal relations in the real world. Bayesian networks do not provide a logical but a probabilistic output (Charniak, 1991).  ... 
doi:10.1613/jair.1.12228 fatcat:nd3hfatjknhexb5eabklk657ey

The Uncertainty of Counterfactuals in Deep Learning

Katherine Elizabeth Brown, Doug Talbert, Steve Talbert
2021 Proceedings of the ... International Florida Artificial Intelligence Research Society Conference  
...  from that training data for a deep neural network.  ...  Counterfactuals have become a useful tool for explainable Artificial Intelligence (XAI).  ...  Experiment 2: One-Class Support Vector Machine. Based upon preliminary results, we designed two additional experiments to ascertain the nature of counterfactuals in relation to the original dataset.  ... 
doi:10.32473/flairs.v34i1.128795 fatcat:qnee5vhfvffnfac3wthrfcu44m

Preserving Causal Constraints in Counterfactual Explanations for Machine Learning Classifiers [article]

Divyat Mahajan, Chenhao Tan, Amit Sharma
2020 arXiv   pre-print
Our experiments on Bayesian networks and the widely used "Adult-Income" dataset show that our proposed methods can generate counterfactual explanations that better satisfy feasibility constraints than  ...  For explanations of ML models in critical domains such as healthcare and finance, counterfactual examples are useful for an end-user only to the extent that perturbation of feature inputs is feasible in  ...  Appendix C (Implementation Details and Results on Bayesian Networks and Adult Dataset): Here we provide implementation details and additional results on the Bayesian network and Adult datasets.  ... 
arXiv:1912.03277v3 fatcat:hzcnuducafhf5jaeag6kx6chpm

Medical idioms for clinical Bayesian network development

Evangelia Kyrimi, Mariana Raniere Neves, Scott McLachlan, Martin Neil, William Marsh, Norman Fenton
2020 Journal of Biomedical Informatics  
Bayesian Networks (BNs) are graphical probabilistic models that have proven popular in medical applications.  ...  While numerous medical BNs have been published, most are presented fait accompli, without explanation of how the network structure was developed or justification of why it represents the correct structure  ... 
doi:10.1016/j.jbi.2020.103495 pmid:32619692 fatcat:hnmvyqhha5frfdfyirtfemxgoe

On Generating Plausible Counterfactual and Semi-Factual Explanations for Deep Learning [article]

Eoin M. Kenny, Mark T. Keane
2020 arXiv   pre-print
This paper advances a novel method for generating plausible counterfactuals (and semifactuals) for black box CNN classifiers doing computer vision.  ...  The present method, called PlausIble Exceptionality-based Contrastive Explanations (PIECE), modifies all exceptional features in a test image to be normal from the perspective of the counterfactual class  ...  In this paper, we advance a new technique for XAI using counterfactual and semi-factual explanations, applied to deep learning models [i.e., convolutional neural networks (CNNs)].  ... 
arXiv:2009.06399v1 fatcat:2bmz34g2hbagfoampy7oye2ipe

Getting a CLUE: A Method for Explaining Uncertainty Estimates [article]

Javier Antorán, Umang Bhatt, Tameem Adel, Adrian Weller, José Miguel Hernández-Lobato
2021 arXiv   pre-print
We address this gap by proposing a novel method for interpreting uncertainty estimates from differentiable probabilistic models, like Bayesian Neural Networks (BNNs).  ...  We validate CLUE through 1) a novel framework for evaluating counterfactual explanations of uncertainty, 2) a series of ablation experiments, and 3) a user study.  ...  Probabilistic backpropagation for scalable learning of Bayesian neural networks.  ... 
arXiv:2006.06848v2 fatcat:5chka3x76ngoxev5y7umed42n4

A Survey of Contrastive and Counterfactual Explanation Generation Methods for Explainable Artificial Intelligence

Ilia Stepin, Jose M. Alonso, Alejandro Catala, Martin Pereira-Farina
2021 IEEE Access  
Then, we report the state-of-the-art computational frameworks for contrastive and counterfactual explanation generation.  ...  To disclose the reasoning behind such algorithms, their output can be explained by means of so-called evidence-based (or factual) explanations.  ...  introduce an argumentation-based framework for social network management [149].  ... 
doi:10.1109/access.2021.3051315 fatcat:3zupk4jfdncuvj5osdkk7rykdm
Showing results 1–15 of 1,215 results