1,013,203 Hits in 9.3 sec

Understanding Global Feature Contributions With Additive Importance Measures [article]

Ian Covert, Scott Lundberg, Su-In Lee
2020 arXiv   pre-print
To assess the role of individual input features in a global sense, we explore the perspective of defining feature importance through the predictive power associated with each feature.  ...  We introduce two notions of predictive power (model-based and universal) and formalize this approach with a framework of additive importance measures, which unifies numerous methods in the literature.  ...  Additive Importance Measures: In certain very simple cases, features contribute predictive power in an additive manner.  ...
arXiv:2004.00668v2 fatcat:5js5turxy5gmpledm2uqsafmxe
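
As a rough illustration of the additive-importance idea in the Covert et al. entry above, the sketch below estimates the predictive power v(S) of a feature subset S as the held-out R² of a model refit on S alone, and checks that single-feature powers approximately add up when features are independent and the target is linear. The data, model, and subset-value estimator are illustrative assumptions, not the paper's method.

```python
# Toy sketch of "predictive power of a feature subset": v(S) is estimated as the
# held-out R^2 of a model refit on the features in S alone. Data and model choice
# are illustrative assumptions, not taken from the paper.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(scale=0.5, size=1000)  # feature 2 is pure noise

def predictive_power(subset):
    """v(S): mean cross-validated R^2 using only the columns in `subset` (v(empty) = 0)."""
    if not subset:
        return 0.0
    return cross_val_score(LinearRegression(), X[:, list(subset)], y, cv=5, scoring="r2").mean()

# In this additive, independent-feature setting the single-feature powers
# roughly sum to the power of the full feature set.
for s in [(0,), (1,), (2,), (0, 1, 2)]:
    print(s, round(predictive_power(s), 3))
```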

Marginal Contribution Feature Importance – an Axiomatic Approach for The Natural Case [article]

Amnon Catav, Boyang Fu, Jason Ernst, Sriram Sankararaman, Ran Gilad-Bachrach
2020 arXiv   pre-print
Marginal Contribution Feature Importance (MCI).  ...  While it is common to make the distinction between local scores that focus on individual predictions and global scores that look at the contribution of a feature to the model, another important division  ...  Global-model (the engineer scenario): Another common use case for feature importance is in understanding a specific model.  ... 
arXiv:2010.07910v1 fatcat:lymxps4s2bhefp6jq6m7wj5oju
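
A brute-force sketch of the marginal-contribution idea as read from the abstract above: score a feature by the largest gain in value it adds over any subset of the remaining features. The value function below is a hypothetical toy lookup table (feature names and numbers are made up); it is not the authors' implementation or their axiomatic derivation.

```python
# Brute-force sketch: importance of feature f = max over subsets S of v(S ∪ {f}) - v(S).
# v is a hypothetical toy lookup table; exhaustive enumeration is only feasible
# for a handful of features.
from itertools import combinations

features = ["age", "bmi", "noise"]  # hypothetical feature names
v = {  # toy "predictive power" for every feature subset (keys sorted alphabetically)
    (): 0.0,
    ("age",): 0.40, ("bmi",): 0.30, ("noise",): 0.01,
    ("age", "bmi"): 0.60, ("age", "noise"): 0.41, ("bmi", "noise"): 0.31,
    ("age", "bmi", "noise"): 0.61,
}

def marginal_contribution_importance(feature):
    rest = [f for f in features if f != feature]
    subsets = (combo for r in range(len(rest) + 1) for combo in combinations(rest, r))
    return max(v[tuple(sorted(set(combo) | {feature}))] - v[combo] for combo in subsets)

print({f: marginal_contribution_importance(f) for f in features})
```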

X-SHAP: towards multiplicative explainability of Machine Learning [article]

Luisa Bouneder, Yannick Léo, Aimé Lachapelle
2020 arXiv   pre-print
We test the method on various datasets and propose a set of techniques based on individual X-SHAP contributions to build aggregated multiplicative contributions and to capture multiplicative feature importance  ...  This paper introduces X-SHAP, a model-agnostic method that assesses multiplicative contributions of variables for both local and global predictions.  ...  We propose the X-SHAP toolbox, a new set of techniques used to understand global and segmented model structure by aggregating multiple local contributions,  ... 
arXiv:2006.04574v2 fatcat:34wyu6am55fx3lz6pdhqe5dzua
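
Not the X-SHAP algorithm itself, but a small sketch of the relationship it builds on: any additive attribution of log f(x) turns into multiplicative factors whose product recovers f(x). The baseline and attribution numbers below are hypothetical.

```python
# Illustration (not the X-SHAP algorithm): an additive attribution of log f(x),
# i.e. log f(x) = psi_0 + sum_i psi_i, yields multiplicative factors
# phi_i = exp(psi_i) whose product recovers the prediction f(x).
import numpy as np

log_baseline = np.log(0.08)        # hypothetical baseline prediction of 0.08
psi = np.array([0.9, -0.3, 0.2])   # hypothetical additive attributions on the log scale

phi_0 = np.exp(log_baseline)
phi = np.exp(psi)                  # multiplicative contribution of each feature
prediction = phi_0 * phi.prod()

assert np.isclose(np.log(prediction), log_baseline + psi.sum())
print(phi_0, phi, prediction)
```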

On Understanding the Influence of Controllable Factors with a Feature Attribution Algorithm: a Medical Case Study [article]

Veera Raghava Reddy Kovvuri, Siyuan Liu, Monika Seisenberger, Berndt Müller, Xiuyi Fan
2022 arXiv   pre-print
Feature attribution XAI algorithms enable their users to gain insight into the underlying patterns of large datasets through their feature importance calculation.  ...  Experimental results show that with CAFA, we are able to exclude influences from uncontrollable features in our explanation while keeping the full dataset for prediction.  ...  Knowing the relative importance of these features makes little contribution to clinical decision making.  ... 
arXiv:2203.12701v1 fatcat:7gxipf5uyffurir5ynrpiw4a24

Comparison and Explanation of Forecasting Algorithms for Energy Time Series

Yuyi Zhang, Ruimin Ma, Jing Liu, Xiuxiu Liu, Ovanes Petrosian, Kirill Krinkin
2021 Mathematics  
features.  ...  In addition, as an innovation, we introduce the Explainable AI method (SHAP) to explain models with excellent performance indicators, thereby strengthening their trust and transparency; (3) The results show  ...  In addition, SHAP can also output the features most relevant to the most important features, and the relationship between the two, which helps us understand the internal workings  ...
doi:10.3390/math9212794 fatcat:rdle7qspdvcsdkql6sqmgji3dq
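
A minimal sketch of the workflow the snippet describes for forecasting models, assuming the shap and scikit-learn packages are installed; the lagged features, temperature covariate, and random-forest model are placeholders rather than the paper's energy data and algorithms.

```python
# Sketch: fit a tree-ensemble forecaster on lagged features, then use SHAP to get
# local attributions and a global ranking (mean |SHAP value| per feature).
# Data and features are synthetic placeholders, not the paper's energy series.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.normal(size=n),          # lag_1
    rng.normal(size=n),          # lag_24
    rng.uniform(0, 35, size=n),  # temperature
])
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.05 * X[:, 2] + rng.normal(scale=0.1, size=n)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)               # one additive attribution per sample and feature
global_importance = np.abs(shap_values).mean(axis=0)
for name, score in zip(["lag_1", "lag_24", "temperature"], global_importance):
    print(f"{name}: {score:.3f}")
```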

Impact of Accuracy on Model Interpretations [article]

Brian Liu, Madeleine Udell
2020 arXiv   pre-print
Doing so requires an understanding of how the accuracy of a model impacts the quality of standard interpretation tools.  ...  We propose two metrics to quantify the quality of an interpretation and design an experiment to test how these metrics vary with model accuracy.  ...  It is important to note that LIME local explanations are not additive and can not be combined into a global feature importance score.  ... 
arXiv:2011.09903v1 fatcat:hioihasufnhpjjddbe5oxi774u
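
The additivity point in the snippet above can be made concrete with made-up numbers: attributions that sum (with a baseline) to each prediction can be pooled into a global score such as the mean absolute attribution per feature, whereas non-additive local explanations offer no such guarantee.

```python
# Made-up local attributions for 4 samples and 3 features. Each row sums (with the
# baseline) to that sample's prediction; this additivity is what licenses pooling
# rows into a single global importance vector.
import numpy as np

baseline = 0.5
local_attributions = np.array([
    [ 0.20, -0.05,  0.01],
    [ 0.15,  0.10, -0.02],
    [-0.30,  0.05,  0.00],
    [ 0.25, -0.10,  0.03],
])
predictions = baseline + local_attributions.sum(axis=1)    # additivity check per sample

global_importance = np.abs(local_attributions).mean(axis=0)
print(predictions)
print(global_importance)   # feature 0 dominates in this toy example
```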

Exploring Local Explanations of Nonlinear Models Using Animated Linear Projections [article]

Nicholas Spyrison, Dianne Cook
2022 arXiv   pre-print
XAI attempts to shed light on how models use predictors to arrive at a prediction with local explanations, a point estimate of the linear feature importance in the vicinity of one instance.  ...  These can be considered linear projections and can be further explored to understand better the interactions between features used to make predictions across the predictive model surface.  ...  However, increasing the contribution of a variable with low importance would rotate both features out of the frame. The contribution from def will be varied to contrast with offensive skills.  ... 
arXiv:2205.05359v1 fatcat:yrpblwzxpjd2jpy76paux4vh7m

Global Model Interpretation via Recursive Partitioning [article]

Chengliang Yang, Anand Rangarajan, Sanjay Ranka
2018 arXiv   pre-print
In this work, we propose a simple but effective method to interpret black-box machine learning models globally.  ...  In general, our work makes it easier and more efficient for human beings to understand machine learning models.  ...  After getting the contribution matrix that measures the importance of each semantic category for the scene categories for each image, we could run our Global Interpretation via Recursive Partitioning (GIRP  ...
arXiv:1802.04253v2 fatcat:vtu6tv5nwvbgdbrx5sgu3ermgi

Shapley-Lorenz eXplainable Artificial Intelligence

Paolo Giudici, Emanuela Raffinetti
2020 Expert systems with applications  
In this paper, we provide a global explainable AI method which is based on Lorenz decompositions, thus extending previous contributions based on variance decompositions.  ...  This allows the resulting Shapley-Lorenz decomposition to be more generally applicable, and provides a unifying variable importance criterion that combines predictive accuracy with explainability, using  ...  In relation to this, they are not suited to understanding which variables are important at the overall level.  ...
doi:10.1016/j.eswa.2020.114104 fatcat:ijtlwjfje5hlnizwpzwwmjw5xq

Case-study Led Investigation of Explainable AI (XAI) to Support Deployment of Prognostics in the industry

Omnia Amin, Blair Brown, Bruce Stephen, Stephen McArthur
2022 Proceedings of the European Conference of the Prognostics and Health Management Society (PHME)  
The use of XAI will not only help in understanding how these ML models work, but will also describe the most important features contributing to the predicted degradation of the nuclear generation asset.  ...  How ML model outputs convey explanations to stakeholders is important, so these explanations must be in terms that are understandable to humans (and to the technical domain).  ...  The Shapley Additive exPlanations (SHAP) tool computes feature importance by computing the contribution of each feature to the obtained output.  ...
doi:10.36001/phme.2022.v7i1.3336 fatcat:njg5ldqkwfga3k4kfpknvvpoxa

An Interpretable Hand-Crafted Feature-Based Model for Atrial Fibrillation Detection

Rahimeh Rouhi, Marianne Clausel, Julien Oster, Fabien Lauer
2021 Frontiers in Physiology  
The results show the effectiveness and efficiency of the SHapley Additive exPlanations (SHAP) technique along with Random Forest (RF) for the classification of Electrocardiogram (ECG) signals for AF detection  ...  with a mean F-score of 0.746, compared to 0.706 for a technique using the same features in a cascaded SVM approach.  ...  FEATURE IMPORTANCE: Global Explanation and Feature Selection. Global explanation aims to provide an understanding of ML models and highlight the most important parameters or learned representations along  ...
doi:10.3389/fphys.2021.657304 pmid:34054575 pmcid:PMC8155476 fatcat:3kdzh3sf35g7jmntb7likuhgsq

A United States Fair Lending Perspective on Machine Learning

Patrick Hall, Benjamin Cox, Steven Dickerson, Arjun Ravi Kannan, Raghu Kulkarni, Nicholas Schmidt
2021 Frontiers in Artificial Intelligence  
ACKNOWLEDGMENTS: We would like to acknowledge the contributions of Alexey Miroshnikov and Kostas Kotsiopoulos from Discover Financial Services for the section on grouping features for interpretability.  ...  , and by including important features that contribute less toward disparate impact.  ...  et al., 2020).  With this information, a model builder can structure a search for alternative models more efficiently by removing features with low importance and large contributions to disparate impact  ...
doi:10.3389/frai.2021.695301 pmid:34164616 pmcid:PMC8216763 fatcat:bahimakucfh2penepnxplqqrum

A Comparative Study of Additive Local Explanation Methods based on Feature Influences

Emmanuel Doumard, Julien Aligon, Elodie Escriva, Jean-Baptiste Excoffier, Paul Monsarrat, Chantal Soulé-Dupuy
2022 International Workshop on Data Warehousing and OLAP  
Local additive explanation methods are increasingly used to understand the predictions of complex Machine Learning (ML) models.  ...  The most used additive methods, SHAP and LIME, suffer from limitations that are rarely measured in the literature.  ...  We also thank the French National Association for Research and Technology (ANRT) and Kaduceo company for providing us with PhD grants (no. 2020/0964).  ... 
dblp:conf/dolap/DoumardAEEMS22 fatcat:exmkveyifbdglhylvfr7gjbasy

Extraction of human understandable insight from machine learning model for diabetes prediction

Tsehay Admassu Assegie, Thulasi Karpagam, Radha Mothukuri, Ravulapalli Lakshmi Tulasi, Minychil Fentahun Engidaye
2022 Bulletin of Electrical Engineering and Informatics  
and permutation feature importance by employing extreme gradient boosting (XGBoost).  ...  The experiment is conducted on a diabetes dataset with the aim of investigating the most influential features on the model output.  ...  SHAP provides the local and global effect of diabetes features, while permutation-based feature importance with XGBoost and LIME only provide the global influence of features on the model output.  ...
doi:10.11591/eei.v11i2.3391 fatcat:rddrmcn7vjb6zoxkqjfj5wvxqy
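
A compact sketch of the comparison described in the snippet above, assuming xgboost, scikit-learn, and shap are installed; the synthetic dataset stands in for the diabetes data, and the hyperparameters are arbitrary.

```python
# Sketch: XGBoost classifier on a synthetic binary-classification dataset, examined
# through permutation feature importance (global) and SHAP values (local + global).
# The data is synthetic, standing in for the paper's diabetes dataset.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from xgboost import XGBClassifier

X, y = make_classification(n_samples=600, n_features=6, n_informative=3, random_state=0)

model = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss", random_state=0)
model.fit(X, y)

# Global view: permutation importance (drop in accuracy when a feature is shuffled).
perm = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print("permutation importance:", np.round(perm.importances_mean, 3))

# Local + global view: SHAP values, pooled by mean |value| per feature.
shap_values = shap.TreeExplainer(model).shap_values(X)
print("mean |SHAP|:", np.round(np.abs(shap_values).mean(axis=0), 3))
```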

Interpretable Additive Recurrent Neural Networks For Multivariate Clinical Time Series [article]

Asif Rahman, Yale Chang, Jonathan Rubin
2021 arXiv   pre-print
We want accurate time series models where users can understand the contribution of individual input features.  ...  I-RNN provides explanations in the form of global and local feature importances comparable to highly intelligible models like decision trees trained on hand-engineered features while significantly outperforming  ...  We want accurate time series models that are interpretable, where by interpretability we mean understanding the contribution of individual input features  ...
arXiv:2109.07602v1 fatcat:nsou6noetzhe7cqyj4326ye3jm
Showing results 1 — 15 out of 1,013,203 results