536 Hits in 2.0 sec

Counterfactual Shapley Additive Explanations [article]

Emanuele Albini, Jason Long, Danial Dervovic, Daniele Magazzeni
2021 arXiv   pre-print
Concretely, we propose a variant of SHAP, CoSHAP, that uses counterfactual generation techniques to produce a background dataset for use within the marginal (a.k.a. interventional) Shapley value framework  ...  Feature attributions are a common paradigm for model explanations due to their simplicity in assigning a single numeric score for each input feature to a model.  ...  Secondly, investigating additional metrics for the evaluation of a feature attribution in counterfactual terms would also be of interest.  ... 
arXiv:2110.14270v2 fatcat:fvefbc5y3nbmvegfjl46xqew5u
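To make the CoSHAP idea from this abstract concrete, here is a minimal Python sketch, assuming the `shap` package and scikit-learn are available, of marginal (interventional) Shapley values computed against a counterfactual background dataset. The `generate_counterfactuals` helper is a hypothetical placeholder for whatever counterfactual generator the method is paired with, not the authors' code.

```python
# Sketch: use counterfactual examples as the background dataset for
# marginal (interventional) Shapley values, per the CoSHAP abstract.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

def generate_counterfactuals(model, x, n=50, seed=0):
    """Hypothetical stand-in for a counterfactual generator: sample
    perturbations of x and keep those the model classifies differently."""
    rng = np.random.default_rng(seed)
    candidates = x + rng.normal(0.0, 1.0, size=(5000, x.shape[0]))
    flipped = candidates[model.predict(candidates) != model.predict(x[None, :])[0]]
    return flipped[:n]

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = RandomForestClassifier(random_state=0).fit(X, y)

x = X[0]
background = generate_counterfactuals(model, x)        # CoSHAP-style background
explainer = shap.Explainer(model.predict, background)  # marginal Shapley values
print(explainer(x[None, :]).values)
```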

Rational Shapley Values [article]

David S. Watson
2021 arXiv   pre-print
., counterfactuals). In this paper, I introduce rational Shapley values, a novel XAI method that synthesizes and extends these seemingly incompatible approaches in a rigorous, flexible manner.  ...  By pairing the distribution of random variables with the appropriate reference class for a given explanation task, I illustrate through theory and experiments how user goals and knowledge can inform and  ...  Counterfactual explanations, unlike Shapley values, do not produce feature weights.  ... 
arXiv:2106.10191v1 fatcat:auodnmccejfqjew2c5hba7n6mi

Score-Based Explanations in Data Management and Machine Learning [article]

Leopoldo Bertossi
2020 arXiv   pre-print
More specifically, we consider explanations for query answers in databases, and for results from classification models. The described approaches are mostly of a causal and counterfactual nature.  ...  We describe some approaches to explanations for observed outcomes in data management and machine learning.  ...  where his interest in explanations in ML started.  ... 
arXiv:2007.12799v2 fatcat:yue3oo7hl5hf5gjvuj7ps6sufy

Critical Empirical Study on Black-box Explanations in AI [article]

Jean-Marie John-Mathews
2021 arXiv   pre-print
Secondly, we show the importance of tested behavioral indicators, in addition to self-reported perceived indicators, to provide a more comprehensive view of the dimensions of interpretability.  ...  This paper raises empirical concerns about post-hoc explanations of black-box ML models, one of the major trends in AI explainability (XAI), by showing their lack of interpretability and societal consequences  ...  counterfactual explanations do not significantly increase understanding, in contrast to transparent and Shapley explanations (Table 2).  ... 
arXiv:2109.15067v1 fatcat:62jvt2xwczg4jccdfuastbyrle

Explaining Data-Driven Decisions made by AI Systems: The Counterfactual Approach [article]

Carlos Fernández-Loría, Foster Provost, Xintian Han
2021 arXiv   pre-print
We examine counterfactual explanations for explaining the decisions made by model-based AI systems.  ...  We demonstrate this with several concise examples and three detailed case studies that compare the counterfactual approach with SHAP to illustrate various conditions under which counterfactual explanations  ...  Table 3: Shapley values for C2 and counterfactual explanations for this decision. Table 4: Shapley values for C3 and counterfactual explanations for this decision.  ... 
arXiv:2001.07417v5 fatcat:76n6mxrnonca7myaujflzu7rdi
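A hedged, worked toy example of the contrast this paper draws (the numbers are illustrative, not from the paper's case studies): a counterfactual explanation names a change that flips the decision, while Shapley values apportion the score relative to a baseline.

```python
# Illustrative numbers only (not from the paper's case studies).
income, debt = 30_000, 12_000
score = 0.001 * income - 0.002 * debt        # linear credit score
decision = score >= 10                       # 30 - 24 = 6  -> denied

# Counterfactual explanation: the smallest single-feature change
# that flips the decision to "approve".
income_needed = (10 + 0.002 * debt) / 0.001  # 34,000: "approved if income were 34k"

# Shapley values for a linear model reduce to weight * (x - baseline),
# apportioning score - baseline_score across the features.
baseline_income, baseline_debt = 20_000, 20_000
phi_income = 0.001 * (income - baseline_income)  # +10
phi_debt = -0.002 * (debt - baseline_debt)       # +16
print(income_needed, phi_income, phi_debt)
```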

Provably Robust Model-Centric Explanations for Critical Decision-Making [article]

Cecilia G. Morales, Nicholas Gisolfi, Robert Edman, James K. Miller, Artur Dubrawski
2021 arXiv   pre-print
, popular data-centric explanation tools in Artificial Intelligence (AI).  ...  We compare and contrast these methods, and show that data-centric methods may yield brittle explanations of limited practical utility.  ...  Common explanatory tools, including Local Interpretable Model-Agnostic Explanations (LIME) (Ribeiro, Singh, and Guestrin 2016) and Shapley Additive Explanations (SHAP) (Lundberg and Lee 2017), are  ... 
arXiv:2110.13937v1 fatcat:gqjjonbedneuxdtldtbmrl4nia

Explainable AI meets Healthcare: A Study on Heart Disease Dataset [article]

Devam Dave, Het Naik, Smiti Singhal, Pankesh Patel
2020 arXiv   pre-print
With the increasingly indispensable role of AI in healthcare, there are growing concerns over the lack of transparency and explainability in addition to potential bias encountered by predictions of the  ...  (LIME) and Shapley Additive Explanations (SHAP).  ...  SHapley Additive exPlanations (SHAP): the SHAP [5] method increases the transparency of the model, and it is best understood through the game-theoretic concept from which it arises.  ... 
arXiv:2011.03195v1 fatcat:2wdmohrnxnhvhk5c2pohcdlbty
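As a sketch of the kind of SHAP usage this paper describes, assuming the `shap` package and a synthetic stand-in for the heart disease dataset:

```python
# Standard SHAP workflow on a tabular classifier; the data and model
# here are placeholders, not the paper's heart disease setup.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))                  # placeholder features (age, bp, ...)
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # placeholder target
model = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)          # exact, fast path for tree models
shap_values = explainer.shap_values(X[:5])     # per-feature attributions
print(shap_values)                             # one row per patient, one column per feature
```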

Explainable Machine Learning in Deployment [article]

Umang Bhatt, Alice Xiang, Shubham Sharma, Adrian Weller, Ankur Taly, Yunhan Jia, Joydeep Ghosh, Ruchir Puri, José M. F. Moura, Peter Eckersley
2020 arXiv   pre-print
Explainable machine learning offers the potential to provide stakeholders with insights into model behavior by using various methods such as feature importance scores, counterfactual explanations, or influential  ...  There is thus a gap between explainability in practice and the goal of transparency, since explanations primarily serve internal stakeholders rather than external ones.  ...  Counterfactual Explanations Counterfactual explanations are techniques that explain individual predictions by providing a means for recourse.  ... 
arXiv:1909.06342v4 fatcat:rw2e7lkfazd2lipawpilhuyy6e

Some Critical and Ethical Perspectives on the Empirical Turn of AI Interpretability [article]

Jean-Marie John-Mathews
2021 arXiv   pre-print
Using a randomized study, we experimentally show that the empirical and liberal turn of the production of explanations tends to select AI explanations with a low denunciatory power.  ...  We also show that the denunciatory power of AI explanations is highly dependent on the context in which the explanation takes place, such as the gender or education level of the person to whom the explanation  ...  In this case, for this last section, counterfactual explanations are filtered out so that only Shapley explanations and transparent algorithms are kept.  ... 
arXiv:2109.09586v1 fatcat:3udtpqpukfhrlm3trvu5ea6y5e

Counterfactuals and Causability in Explainable Artificial Intelligence: Theory, Algorithms, and Applications [article]

Yu-Liang Chou and Catarina Moreira and Peter Bruza and Chun Ouyang and Joaquim Jorge
2021 arXiv   pre-print
biased explanations.  ...  A specific class of algorithms that have the potential to provide causability are counterfactuals.  ...  SHAP (SHapley Additive exPlanations) is an explanation method that uses Shapley values [70] from coalitional game theory to fairly distribute the gain among players  ... 
arXiv:2103.04244v2 fatcat:uqs3y7v7hrhtxkh2ltl4wluyqe
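For reference, the coalitional-game Shapley value the snippet alludes to, with $N$ the set of players (features) and $v(S)$ the payoff of coalition $S$:

```latex
\phi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}}
  \frac{|S|!\,\bigl(|N| - |S| - 1\bigr)!}{|N|!}
  \bigl( v(S \cup \{i\}) - v(S) \bigr)
```

The weight counts the orderings in which the members of $S$ precede player $i$, so $\phi_i$ is player $i$'s average marginal contribution over all orderings.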

Measurable Counterfactual Local Explanations for Any Classifier [article]

Adam White, Artur d'Avila Garcez
2019 arXiv   pre-print
A system called CLEAR (Counterfactual Local Explanations via Regression) is introduced and evaluated.  ...  CLEAR generates w-counterfactual explanations that state the minimum changes necessary to flip a prediction's classification.  ...  Hence, an additional b-counterfactual explanation for Mr Jones might be that he would also get the loan if his annual salary were $33,000 and he had been employed for more than 5 years.  ... 
arXiv:1908.03020v2 fatcat:vqcb6s27gba7fpy6pntitxmhfm
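A simplified illustration of what such a counterfactual statement expresses (a brute-force single-feature scan, not the CLEAR regression-based algorithm):

```python
# Find the smallest single-feature change that flips the predicted
# class; a toy stand-in for "minimum changes necessary to flip".
import numpy as np

def single_feature_counterfactual(predict, x, span=5.0, steps=200):
    """Returns (feature index, change) for the smallest flip found,
    or None if no single-feature change within +/-span flips it."""
    base = predict(x[None, :])[0]
    best = None
    deltas = sorted(np.linspace(-span, span, steps), key=abs)
    for i in range(x.shape[0]):
        for delta in deltas:                  # smallest |delta| first
            x_cf = x.copy()
            x_cf[i] += delta
            if predict(x_cf[None, :])[0] != base:
                if best is None or abs(delta) < abs(best[1]):
                    best = (i, delta)
                break                         # smallest flip for this feature found
    return best
```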

DisCERN: Discovering Counterfactual Explanations using Relevance Features from Neighbourhoods

Nirmalie Wiratunga, Anjana Wijekoon, Ikechukwu Nkisi-Orji, Kyle Martin, David Corsar, Chamath Palihawadana
2022 Zenodo  
Our results demonstrate that DisCERN is an effective strategy to minimise the actionable changes necessary to create good counterfactual explanations.  ...  Counterfactual explanations focus on "actionable knowledge" to help end-users understand how a machine learning outcome could be changed to a more desirable outcome.  ...  SHapley Additive exPlanation (SHAP): SHAP [10] is a model-agnostic feature relevance explainer with theoretical guarantees about consistency and local accuracy from game theory, based on the Shapley  ... 
doi:10.5281/zenodo.5837861 fatcat:kkkloi3lnvd4vc5ra6hd7imtku
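A minimal sketch of the idea in this abstract, assuming a fitted classifier and precomputed relevance scores (e.g. SHAP values); this is not the authors' implementation:

```python
# Adapt the query towards its nearest unlike neighbour, substituting
# feature values in descending relevance order until the prediction
# flips, so that as few actionable changes as possible are made.
import numpy as np

def discern_style_counterfactual(predict, x, X_other, relevance):
    """X_other: training rows with the desired outcome;
    relevance: per-feature importance scores for x (e.g. SHAP values)."""
    base = predict(x[None, :])[0]
    nun = X_other[np.argmin(np.linalg.norm(X_other - x, axis=1))]  # nearest unlike neighbour
    x_cf = x.copy()
    for i in np.argsort(-np.abs(relevance)):   # most relevant feature first
        x_cf[i] = nun[i]
        if predict(x_cf[None, :])[0] != base:
            return x_cf                        # flipped with minimal substitutions
    return x_cf
```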

Score-Based Explanations in Data Management and Machine Learning: An Answer-Set Programming Approach to Counterfactual Analysis [article]

Leopoldo Bertossi
2021 arXiv   pre-print
We describe some recent approaches to score-based explanations for query answers in databases and outcomes from classification models in machine learning.  ...  Special emphasis is placed on declarative approaches based on answer-set programming to the use of counterfactual reasoning for score specification and computation.  ...  where his interest in explanations in ML started. Part of this work was funded by ANID, Millennium Science Initiative Program, Code ICN17002.  ... 
arXiv:2106.10562v2 fatcat:pb63d7kyo5e6jn6yu5mipyproy

Problems with Shapley-value-based explanations as feature importance measures [article]

I. Elizabeth Kumar, Suresh Venkatasubramanian, Carlos Scheidegger, Sorelle Friedler
2020 arXiv   pre-print
We also draw on additional literature to argue that Shapley values do not provide explanations which suit human-centric goals of explainability.  ...  Justification for these methods rests on two pillars: their desirable mathematical properties, and their applicability to specific motivations for explanations.  ...  In this sense, we can interpret Shapley-based explanations as a contrastive statement where the outcome to be explained is v(D) and the foil (the counterfactual case which did not happen) is implicitly  ... 
arXiv:2002.11097v2 fatcat:ivmcerf35vhxhai6htacjluegi
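The contrastive reading can be seen in the permutation form of the Shapley value, where $\mathrm{pre}_i(\pi)$ denotes the set of players preceding $i$ in ordering $\pi$ over the player set $D$:

```latex
\phi_i \;=\; \frac{1}{|D|!} \sum_{\pi}
  \Bigl( v\bigl(\mathrm{pre}_i(\pi) \cup \{i\}\bigr) - v\bigl(\mathrm{pre}_i(\pi)\bigr) \Bigr)
```

Each summand contrasts the outcome with feature $i$ present against the implicit foil $v(\mathrm{pre}_i(\pi))$ without it.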

Explaining by Removing: A Unified Framework for Model Explanation [article]

Ian Covert, Scott Lundberg, Su-In Lee
2020 arXiv   pre-print
To anchor removal-based explanations in cognitive psychology, we show that feature removal is a simple application of subtractive counterfactual reasoning.  ...  Researchers have proposed a wide variety of model explanation approaches, but it remains unclear how most methods are related or when one method is preferable to another.  ...  Removal-based explanations can provide more insight by concisely summarizing the results of many subtractive counterfactuals (e.g., via the Shapley value).  ... 
arXiv:2011.14878v1 fatcat:sz2dtngafjh2hh7cmgib3s2kqq
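A minimal Monte Carlo sketch of this summarization, assuming removal is simulated by marginalizing removed features over a background sample (one common removal choice; not the paper's full framework):

```python
# Shapley values computed by averaging many subtractive counterfactuals:
# each step removes/adds one feature and records the change in output.
import numpy as np

def removed(predict, x, keep, background):
    """Prediction with features outside `keep` marginalized over the background."""
    Xb = background.copy()
    Xb[:, keep] = x[keep]
    return predict(Xb).mean()

def shapley_by_removal(predict, x, background, n_perm=200, seed=0):
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    phi = np.zeros(d)
    for _ in range(n_perm):
        order = rng.permutation(d)
        keep = []
        prev = removed(predict, x, keep, background)
        for i in order:
            keep.append(i)
            cur = removed(predict, x, keep, background)
            phi[i] += cur - prev              # one subtractive contrast
            prev = cur
    return phi / n_perm                       # average marginal contributions
```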
Showing results 1–15 out of 536 results