1,967 Hits in 7.1 sec

Explaining Models by Propagating Shapley Values of Local Components [article]

Hugh Chen and Scott Lundberg and Su-In Lee
2019 arXiv   pre-print
In order to make these complex models explainable, we present DeepSHAP for mixed model types, a framework for layer-wise propagation of Shapley values that builds upon DeepLIFT (an existing approach for ...  We show that in addition to being able to explain neural networks, this new framework naturally enables attributions for stacks of mixed models (e.g., neural network feature extractor into a tree model ...  In this paper, we focus on SHAP values (Lundberg and Lee 2017), that is, Shapley values (Shapley 1953) with a conditional expectation of the model prediction as the set function.  ...
arXiv:1911.11888v1 fatcat:cnld7vfldna7pix74enxwd3hem
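The SHAP values referenced in this abstract are the classical Shapley values applied to a set function built from the model's conditional expectation. As a reminder of that definition (notation mine, not quoted from the paper), for a model f, input x, and feature set N:

\[
\phi_i(f, x) \;=\; \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N|-|S|-1)!}{|N|!}\,\bigl[v(S \cup \{i\}) - v(S)\bigr],
\qquad v(S) \;=\; \mathbb{E}\bigl[f(x) \mid x_S\bigr].
\]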

An unexpected unity among methods for interpreting model predictions [article]

Scott Lundberg, Su-In Lee
2016 arXiv   pre-print
Recently, several methods have been proposed for interpreting predictions from complex models by estimating the importance of input features.  ...  This representation is optimal, in the sense that it is the only set of additive values that satisfies important properties.  ...  Sample Efficiency and the Importance of the Shapley Kernel Connecting Shapley values from game theory with locally weighted linear models brings advantages to both concepts.  ... 
arXiv:1611.07478v3 fatcat:olnmbpcuwjbaba6zisc6rvb22i
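The "Shapley kernel" mentioned in this snippet is the weighting under which a locally weighted linear regression recovers Shapley values exactly. A sketch of that weight, with M simplified binary features z' (my notation, not quoted from the paper):

\[
\pi_{x}(z') \;=\; \frac{M - 1}{\binom{M}{|z'|}\,|z'|\,(M - |z'|)},
\]

where |z'| is the number of non-zero entries in z'.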

A Unified Approach to Interpreting Model Predictions [article]

Scott Lundberg, Su-In Lee
2017 arXiv   pre-print
Its novel components include: (1) the identification of a new class of additive feature importance measures, and (2) theoretical results showing there is a unique solution in this class with a set of desirable  ...  However, the highest accuracy for large modern datasets is often achieved by complex models that even experts struggle to interpret, such as ensemble or deep learning models, creating a tension between  ...  Acknowledgements This work was supported by a National Science Foundation (NSF) DBI-135589, NSF CAREER DBI-155230, American Cancer Society 127332-RSG-15-097-01-TBG, National Institute of Health (NIH) AG049196  ... 
arXiv:1705.07874v2 fatcat:5xxg6yvljrhf5mtssjo6jgohqu
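The "class of additive feature importance measures" identified in this paper is the family of explanation models that are linear in binary simplified inputs; a minimal statement of that form (standard notation, assumed rather than quoted):

\[
g(z') \;=\; \phi_0 + \sum_{i=1}^{M} \phi_i\, z'_i, \qquad z' \in \{0,1\}^M,
\]

with the uniqueness result singling out the Shapley values as the only attributions \phi_i in this class satisfying local accuracy, missingness, and consistency.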

Explaining Deep Neural Networks with a Polynomial Time Algorithm for Shapley Values Approximation [article]

Marco Ancona, Cengiz Öztireli, Markus Gross
2019 arXiv   pre-print
In this work, by leveraging recent results on uncertainty propagation, we propose a novel, polynomial-time approximation of Shapley values in deep neural networks.  ...  The problem of explaining the behavior of deep neural networks has recently gained a lot of attention.  ...  In contrast to global interpretability, which aims at explaining the general model behavior, the scope of local interpretability is restricted to explaining a particular decision for a given model and input instance  ...
arXiv:1903.10992v4 fatcat:xqa6j3fr6bgfpdhc7plr64g4km

Explaining a Series of Models by Propagating Shapley Values [article]

Hugh Chen, Scott M. Lundberg, Su-In Lee
2021 arXiv   pre-print
Here, we present DeepSHAP, a tractable method to propagate local feature attributions through complex series of models based on a connection to the Shapley value.  ...  However, current methods are limited because they are extremely expensive to compute or are not capable of explaining a distributed series of models where each model is owned by a separate institution.  ...  Then, it exactly computes the Shapley value for each set and propagates it linearly to each component of the set. Figure 8: Demonstrating bias of a single baseline for IME.  ...
arXiv:2105.00108v2 fatcat:aikiiv4qcbabtpcfnwc3jrmce4
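To make the idea of propagating attributions through a series of models concrete, here is a minimal illustrative sketch, not the authors' DeepSHAP implementation: for a toy two-stage linear pipeline, per-stage attributions are chained with a rescale-style rule so that input attributions still sum to f(x) - f(baseline). The names, the toy models, and the rescale heuristic are assumptions for illustration only.

```python
# Illustrative sketch only (not the paper's DeepSHAP code): propagate local
# attributions through a two-stage pipeline f2(f1(x)) with a rescale-style
# chain rule. Names and the toy models are assumptions.
import numpy as np

def shap_linear(w, x, baseline):
    """Exact Shapley values of a linear model w @ x relative to a baseline."""
    return w * (x - baseline)

W1 = np.array([[1.0, 2.0], [0.5, -1.0]])   # stage 1: 2 inputs -> 2 hidden units
w2 = np.array([3.0, -2.0])                 # stage 2: 2 hidden units -> scalar output

x, baseline = np.array([1.0, 4.0]), np.zeros(2)
h, h0 = W1 @ x, W1 @ baseline

phi_hidden = shap_linear(w2, h, h0)        # attributions on the hidden units

# Chain rule: split each hidden unit's attribution across inputs in proportion
# to each input's contribution to that unit's change from the baseline.
delta = np.where(h == h0, 1e-12, h - h0)
phi_input = ((W1 * (x - baseline)) / delta[:, None] * phi_hidden[:, None]).sum(axis=0)

# Efficiency check: input attributions sum to the change in the final output.
print(phi_input, phi_input.sum(), w2 @ (h - h0))
```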

Explanation of neural language models using SHAP

Emese Vastag
2022 Zenodo  
Explainability of machine learning models is increasing in importance.  ...  This explanation method is based on Shapley values, known from game theory; Lundberg et al. showed that using these values is the unique solution among local feature attribution methods while satisfying  ...  Acknowledgements The research presented in this paper, carried out by the Institute for Computer  ...
doi:10.5281/zenodo.6596106 fatcat:4q5zq4ykejb2tm5iz6744kcmta

Interpretability in deep learning for finance: a case study for the Heston model [article]

Damiano Brigo, Xiaoshan Huang, Andrea Pallavicini, Haitz Saez de Ocariz Borde
2021 arXiv   pre-print
We investigate the capability of local strategies and global strategies coming from cooperative game theory to explain the trained neural networks, and we find that global strategies such as Shapley values  ...  In this paper we focus on the calibration process of a stochastic volatility model, a subject recently tackled by deep learning algorithms.  ...  by explaining the local model g.  ... 
arXiv:2104.09476v1 fatcat:sqieu4if5zfzvle2pduot3o57i

Explainable AI: Using Shapley Value to Explain Complex Anomaly Detection ML-Based Systems [chapter]

Jinying Zou, Ovanes Petrosian
2020 Frontiers in Artificial Intelligence and Applications  
The novelty of our research is that by using the Shapley value and special coding techniques we managed to evaluate or explain the contribution of both a single event and a grouped sequence of events of  ...  In particular, we use the Shapley value approach from cooperative game theory to explain the outcome or solution of two anomaly-detection algorithms: Decision tree and DeepLog.  ...  the use of the Shapley value: • LIME is a method that interprets individual model predictions based on building a local approximation of the model around a given prediction [17]. • DeepLIFT (Deep Learning  ...
doi:10.3233/faia200777 fatcat:rxke326zhrb3dlbicme4osyslu

Shapley explainability on the data manifold [article]

Christopher Frye, Damien de Mijolla, Tom Begley, Laurence Cowton, Megan Stanley, Ilya Feige
2021 arXiv   pre-print
One solution, based on generative modelling, provides flexible access to data imputations; the other directly learns the Shapley value-function, providing performance and stability at the cost of flexibility  ...  However, general implementations of Shapley explainability make an untenable assumption: that the model's features are uncorrelated.  ...  the Shapley framework for model explainability, define on-manifold Shapley values precisely, and introduce global explanations that obey the Shapley axioms. 2.1 Shapley values for model explainability  ... 
arXiv:2006.01272v4 fatcat:7yd7j4zir5cg7lklaecg2ycfuy
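For reference, the contrast this abstract draws can be written as two set functions over a feature subset S: the standard off-manifold (marginal) one versus the on-manifold (conditional) one. The notation here is a paraphrase, not a quotation from the paper:

\[
v_{\text{marg}}(S) \;=\; \mathbb{E}_{p(x_{\bar S})}\bigl[f(x_S, x_{\bar S})\bigr],
\qquad
v_{\text{cond}}(S) \;=\; \mathbb{E}_{p(x_{\bar S} \mid x_S)}\bigl[f(x_S, x_{\bar S})\bigr],
\]

with Shapley values computed from v_cond corresponding to the on-manifold explanations studied in the paper.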

Explaining Graph Neural Networks with Structure-Aware Cooperative Games [article]

Shichang Zhang, Neil Shah, Yozen Liu, Yizhou Sun
2022 arXiv   pre-print
Explaining predictions made by machine learning models is important and has attracted increased interest.  ...  We purport that the Shapley value is a non-ideal choice for graph data because it is by definition not structure-aware.  ...  Surrogate-based methods work by approximating a complex model using an explainable model locally.  ... 
arXiv:2201.12380v3 fatcat:4rsfscw25rbjxa2c3y54p72tje

Interpretable Machine Learning – A Brief History, State-of-the-Art and Challenges [article]

Christoph Molnar, Giuseppe Casalicchio, Bernd Bischl
2020 arXiv   pre-print
IML methods either directly analyze model components, study sensitivity to input perturbations, or analyze local or global surrogate approximations of the ML model.  ...  A further challenge is the lack of a rigorous, community-accepted definition of interpretability.  ...  Popular local IML methods are Shapley values [69, 112] and counterfactual explanations [122, 20, 81, 116, 118].  ...
arXiv:2010.09337v1 fatcat:mldxlgybvzbm3d6hikcywkkhbm

Algorithms to estimate Shapley value feature attributions [article]

Hugh Chen and Ian C. Covert and Scott M. Lundberg and Su-In Lee
2022 arXiv   pre-print
Feature attributions based on the Shapley value are popular for explaining machine learning models; however, their estimation is complex from both a theoretical and computational standpoint.  ...  For the model-agnostic approximations, we benchmark a wide class of estimation approaches and tie them to alternative yet equivalent characterizations of the Shapley value.  ...  Another method to estimate baseline Shapley values for deep models is Deep Approximate Shapley Propagation (DASP) [33] . DASP utilizes uncertainty propagation to estimate baseline Shapley values.  ... 
arXiv:2207.07605v1 fatcat:lay32kevkzcl7cqhw6rzlnwdwm
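As a concrete example of the model-agnostic estimators this survey benchmarks, here is a minimal sketch of the classic permutation-sampling approximation of baseline Shapley values; the code is illustrative (function names and the toy model are my own), not taken from the paper.

```python
# Minimal sketch of permutation-sampling Shapley estimation (illustrative only).
import numpy as np

def sampling_shapley(model, x, baseline, n_permutations=1000, seed=None):
    """Average each feature's marginal contribution over random orderings."""
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    phi = np.zeros(d)
    for _ in range(n_permutations):
        z = baseline.copy()
        prev = model(z)
        for i in rng.permutation(d):
            z[i] = x[i]                  # add feature i to the coalition
            curr = model(z)
            phi[i] += curr - prev        # marginal contribution of feature i
            prev = curr
    return phi / n_permutations

# Toy usage: attributions sum to model(x) - model(baseline).
model = lambda v: v[0] * v[1] + 2.0 * v[2]
x, baseline = np.array([1.0, 2.0, 3.0]), np.zeros(3)
print(sampling_shapley(model, x, baseline, n_permutations=2000, seed=0))
```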

Group based centrality for immunization of complex networks

Chandni Saxena, M.N. Doja, Tanvir Ahmad
2018 Physica A: Statistical Mechanics and its Applications  
We propose a group-based game-theoretic payoff division approach, employing the Shapley value to assign the surplus acquired by participating nodes in different groups through the positional power and functional  ...  We tag these key nodes as Shapley Value based Information Delimiters (SVID).  ...  The Shapley value formulated in the proposed algorithm (Algorithm 1) takes care of the articulation points of the graph, which benefit from their location as information spreaders between the components  ...
doi:10.1016/j.physa.2018.05.107 fatcat:ffa5mm3wl5ejzcx5lbm4cidcba

Toward Explainable AI for Regression Models [article]

Simon Letzgus, Patrick Wagner, Jonas Lederer, Wojciech Samek, Klaus-Robert Müller, Gregoire Montavon
2021 arXiv   pre-print
In addition to the impressive predictive power of machine learning (ML) models, more recently, explanation methods have emerged that enable an interpretation of complex non-linear learning models such  ...  While such Explainable AI (XAI) techniques have reached significant popularity for classifiers, so far little attention has been devoted to XAI for regression models (XAIR).  ...  Hence, the explanation may be biased by this local scope and it may therefore fail to integrate important components of the decision process.  ... 
arXiv:2112.11407v1 fatcat:275m3l5e7nc4npikkdsg5pdqzi

Nondestructive Testing Based Compressive Bearing Capacity Prediction Method for Damaged Wood Components of Ancient Timber Buildings

Lihong Chang, Wei Qian, Hao Chang, Xiaohong Chang, Taoping Ye
2021 Materials  
The measured values of wood components with different defects were consistent with the theoretical values predicted by the wave-drag modulus, which can effectively improve the prediction of residual bearing  ...  the internal damage condition of wood components.  ...  Conflicts of Interest: The authors declare no conflict of interest.  ... 
doi:10.3390/ma14195512 pmid:34639911 pmcid:PMC8509211 fatcat:rma4wfdzx5gidjlvgpvacg6rom
Showing results 1 — 15 out of 1,967 results