Evaluating and Aggregating Feature-based Model Explanations
2020
Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
A feature-based model explanation denotes how much each input feature contributes to a model's output for a given data point. ...
This paper proposes quantitative evaluation criteria for feature-based explanations: low sensitivity, high faithfulness, and low complexity. ...
We thank Pradeep Ravikumar, John Shi, Brian Davis, Kathleen Ruan, Javier Antoran, and James Allingham for their comments. UB acknowledges support from DeepMind and the Leverhulme Trust via the CFI. ...
doi:10.24963/ijcai.2020/417
dblp:conf/ijcai/BhattWM20
fatcat:ufjjqpevwzeynbksqjyeuaxf4a
Evaluating and Aggregating Feature-based Model Explanations
[article]
2020
arXiv
pre-print
A feature-based model explanation denotes how much each input feature contributes to a model's output for a given data point. ...
This paper proposes quantitative evaluation criteria for feature-based explanations: low sensitivity, high faithfulness, and low complexity. ...
We thank Pradeep Ravikumar, John Shi, Brian Davis, Kathleen Ruan, Javier Antoran, James Allingham, and Adithya Raghuraman for their comments and help. ...
arXiv:2005.00631v1
fatcat:c4dwohjvm5grnika3wwmy4sbfy
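To make the proposed criteria concrete, here is a rough numpy sketch of how the three quantities can be estimated for a 1-D feature vector `x`, a scalar-output `model`, and an `explain` function returning per-feature attributions. The estimators (uniform noise for sensitivity, random feature subsets for faithfulness) are simplified assumptions, not the paper's exact definitions.

```python
import numpy as np

def sensitivity(explain, x, radius=0.1, n_samples=20):
    """Max change in the attribution under small input perturbations (lower is better)."""
    base = explain(x)
    worst = 0.0
    for _ in range(n_samples):
        noise = np.random.uniform(-radius, radius, size=x.shape)
        worst = max(worst, np.linalg.norm(explain(x + noise) - base))
    return worst

def faithfulness(model, explain, x, baseline=0.0, n_samples=20):
    """Correlation between attribution mass removed and the drop in model output (higher is better)."""
    attr = explain(x)
    drops, masses = [], []
    for _ in range(n_samples):
        subset = np.random.choice(x.size, size=max(1, x.size // 4), replace=False)
        x_pert = x.copy()
        x_pert[subset] = baseline
        drops.append(model(x) - model(x_pert))
        masses.append(attr[subset].sum())
    return np.corrcoef(masses, drops)[0, 1]

def complexity(explain, x, eps=1e-12):
    """Entropy of the fractional attribution distribution (lower is better)."""
    p = np.abs(explain(x))
    p = p / (p.sum() + eps)
    return float(-(p * np.log(p + eps)).sum())
```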
Evaluating Explainable Methods for Predictive Process Analytics: A Functionally-Grounded Approach
[article]
2020
arXiv
pre-print
We apply the proposed metrics to evaluate the performance of LIME and SHAP in interpreting process predictive models built on XGBoost, which has been shown to be relatively accurate in process predictions ...
We conduct the evaluation using three open source, real-world event logs and analyse the evaluation results to derive insights. ...
Explanation stability will be evaluated in terms of the stability of the subsets of most important features and the stability of feature weights. ...
arXiv:2012.04218v1
fatcat:q7vsu6kl7zhpngw3foov67dble
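The two stability notions in the snippet can be estimated by re-running an explainer on the same instance and comparing the results; a minimal sketch (function names are illustrative):

```python
import numpy as np

def topk_jaccard(attr_a, attr_b, k=5):
    """Jaccard similarity of the top-k most important features of two explanations."""
    top_a = set(np.argsort(-np.abs(attr_a))[:k])
    top_b = set(np.argsort(-np.abs(attr_b))[:k])
    return len(top_a & top_b) / len(top_a | top_b)

def weight_stability(attrs):
    """Mean per-feature standard deviation of weights over repeated runs (lower is more stable)."""
    return float(np.std(np.stack(attrs), axis=0).mean())
```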
NormLime: A New Feature Importance Metric for Explaining Deep Neural Networks
[article]
2019
arXiv
pre-print
LIME develops multiple interpretable models, each approximating a large neural network on a small region of the data manifold and SP-LIME aggregates the local models to form a global interpretation. ...
Extending this line of research, we propose a simple yet effective method, NormLIME for aggregating local models into global and class-specific interpretations. ...
NormLIME is a method for aggregating and normalizing multiple local explanations and estimating the global relative importance of all features utilized by the model. ...
arXiv:1909.04200v2
fatcat:jfpbk7docbdxlabfnol6bs7dn4
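The aggregation step can be pictured as normalizing each local explanation and then averaging per-feature magnitudes. The sketch below is one plausible instantiation, not the authors' reference implementation:

```python
import numpy as np

def normlime_importance(local_weights):
    """Aggregate local LIME weight vectors of shape (n_explanations, n_features)
    into a global per-feature importance score."""
    W = np.abs(np.asarray(local_weights, dtype=float))
    # Normalize each local explanation so its magnitudes sum to 1,
    # then average the normalized importance of every feature.
    W = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)
    return W.mean(axis=0)
```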
Bayesian Optimization using Pseudo-Points
2020
Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
... pseudo-points (i.e., data points whose objective values are not evaluated) to improve the GP model. ...
BO usually models the objective function by a Gaussian process (GP), and iteratively samples the next data point by maximizing an acquisition function. ...
Acknowledgements This work is supported by the National Natural Science Foundation of China (61972275) and the Australian Research Council Linkage Project (LP180100750). ...
doi:10.24963/ijcai.2020/413
dblp:conf/ijcai/ZhangWLL20
fatcat:spsh6xcthbbfvfxbtd3y4copy4
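The loop the snippet describes (fit a GP, maximize an acquisition function, sample, repeat) is standard; a minimal sketch with scikit-learn and expected improvement is below. The pseudo-point augmentation that is the paper's contribution is not shown here.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def expected_improvement(gp, X_cand, y_best):
    """EI acquisition for minimization."""
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def bo_minimize(f, bounds, n_init=5, n_iter=20, seed=0):
    """Vanilla GP-based BO on a 1-D box; e.g. bo_minimize(lambda x: (x[0] - 0.3) ** 2, (0.0, 1.0))."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(bounds[0], bounds[1], size=(n_init, 1))
    y = np.array([f(x) for x in X])
    for _ in range(n_iter):
        gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
        X_cand = rng.uniform(bounds[0], bounds[1], size=(256, 1))
        x_next = X_cand[np.argmax(expected_improvement(gp, X_cand, y.min()))]
        X, y = np.vstack([X, x_next]), np.append(y, f(x_next))
    return X[np.argmin(y)], y.min()
```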
Explanation-Guided Diagnosis of Machine Learning Evasion Attacks
[article]
2021
arXiv
pre-print
Using malware and image classifiers, we conduct comprehensive evaluations across diverse model architectures and complementary feature representations. ...
Evasion accuracy is typically assessed using aggregate evasion rate, and it is an open question whether aggregate evasion rate enables feature-level diagnosis on the effect of adversarial perturbations ...
Acknowledgments We thank our shepherd Giovanni Apruzzese and the anonymous reviewers for their insightful feedback that immensely improved this paper. ...
arXiv:2106.15820v1
fatcat:e4arxiqyrjdcbowx655pbq5wpy
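The aggregate metric the snippet questions is simple to state; the sketch below contrasts it with the kind of feature-level view the paper argues for (an sklearn-style `predict` is assumed):

```python
import numpy as np

def aggregate_evasion_rate(model, X_adv, y_true):
    """Fraction of adversarial inputs the model misclassifies: one number, no feature detail."""
    return float(np.mean(model.predict(X_adv) != y_true))

def per_feature_perturbation(X, X_adv):
    """A feature-level diagnostic: average absolute perturbation applied to each feature."""
    return np.abs(X_adv - X).mean(axis=0)
```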
SEEN: Sharpening Explanations for Graph Neural Networks using Explanations from Neighborhoods
[article]
2021
arXiv
pre-print
SEEN provides a simple but effective method to enhance the explanation quality of GNN model outputs, and this method is applicable in combination with most explainability techniques. ...
In this study, we propose a method to improve the explanation quality of node classification tasks that can be applied in a post hoc manner through aggregation of auxiliary explanations from important ...
Acknowledgments and Disclosure of Funding We thank Taehee Lee for his helpful comments. Funding in direct support of this work was obtained from Samsung SDS. ...
arXiv:2106.08532v1
fatcat:kxkjuhmorjf5tfbrcbokl2j33i
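The core idea, adding importance-weighted auxiliary explanations computed for neighboring nodes to a node's own explanation, can be sketched as follows; the weighting and mixing coefficient are illustrative assumptions, not the paper's exact rule:

```python
import numpy as np

def seen_aggregate(own_attr, neighbor_attrs, neighbor_scores, gamma=0.5):
    """Sharpen a node's attribution vector with explanations from its important neighbors."""
    w = np.asarray(neighbor_scores, dtype=float)
    w = w / max(w.sum(), 1e-12)                      # normalize neighbor importance scores
    aux = sum(wi * a for wi, a in zip(w, neighbor_attrs))
    return own_attr + gamma * aux                    # blend own and aggregated auxiliary signal
```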
A Robust Unsupervised Ensemble of Feature-Based Explanations using Restricted Boltzmann Machines
[article]
2021
arXiv
pre-print
In this paper, we propose a technique for aggregating the feature attributions of different explanatory algorithms using Restricted Boltzmann Machines (RBMs) to achieve a more reliable and robust interpretation ...
Several challenging experiments on real-world datasets show that the proposed RBM method outperforms popular feature attribution methods and basic ensemble techniques. ...
To this end, we propose using a model based on Restricted Boltzmann Machines (RBMs), which achieves this goal by aggregating the results (saliency maps) of different feature-based explanation methods in ...
arXiv:2111.07379v1
fatcat:v353gwp3wfccxczkfdvayixsga
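One way such an RBM ensemble can be instantiated: treat each feature as a training example whose visible units are the (normalized) attributions assigned by the different explainers, and read the aggregate off a single hidden unit. This sketch uses scikit-learn's BernoulliRBM and is an assumption about the setup, not the paper's code:

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

def rbm_aggregate(saliency_maps, n_iter=50):
    """saliency_maps: array of shape (n_methods, n_features) holding each
    explainer's attributions; returns one aggregated score per feature."""
    S = np.abs(np.asarray(saliency_maps, dtype=float))
    S = S / np.maximum(S.max(axis=1, keepdims=True), 1e-12)   # scale each method to [0, 1]
    X = S.T                                                   # rows = features, cols = methods
    rbm = BernoulliRBM(n_components=1, n_iter=n_iter, random_state=0).fit(X)
    return rbm.transform(X)[:, 0]                             # hidden activation = aggregate
```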
An Approach to Aggregating Ensembles of Lazy Learners That Supports Explanation
[chapter]
2002
Lecture Notes in Computer Science
In this paper we present a new technique for aggregation that obtains excellent results and identifies a small number of cases for use in explanation. ...
This new approach might be viewed as a transformation process whereby cases are transformed from their feature-based representation to a representation based on the predictions of ensemble members. ...
Future Work Since the key benefit we claim for this technique is its ability to select cases for use in explanation we need to evaluate the usefulness of the cases retrieved. ...
doi:10.1007/3-540-46119-1_32
fatcat:3uszoud6pzafnlcb2rdthg2vau
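The transformation described above is compact enough to sketch: each case is re-represented by the vector of predictions the ensemble members make for it, and neighbors retrieved in this prediction space double as explanatory cases (sklearn-style `predict` assumed):

```python
import numpy as np

def to_prediction_space(case_features, ensemble):
    """Map a case from its feature-based representation to the vector of
    predictions made by each ensemble member."""
    return np.array([member.predict([case_features])[0] for member in ensemble])
```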
Explaining Convolutional Neural Networks through Attribution-Based Input Sampling and Block-Wise Feature Aggregation
[article]
2020
arXiv
pre-print
In this work, we collect visualization maps from multiple layers of the model based on an attribution-based input sampling technique and aggregate them to reach a fine-grained and complete explanation. ...
Comprehensive experiments conducted on shallow and deep models trained on natural and industrial datasets, using both ground-truth and model-truth based evaluation metrics validate our proposed algorithm ...
a block-wise feature aggregation. ...
arXiv:2010.00672v2
fatcat:wqe6gwenvfhthcginmowmvl2u4
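A simplified stand-in for the aggregation step: upsample per-layer attribution maps to the input resolution, normalize, and average them into one fine-grained map. The normalization and bilinear upsampling choices are assumptions:

```python
import numpy as np
from scipy.ndimage import zoom

def aggregate_layer_maps(layer_maps, out_hw):
    """Average attribution maps from several layers after resizing them to out_hw."""
    agg = np.zeros(out_hw, dtype=float)
    for m in layer_maps:
        m = np.maximum(m, 0.0)                      # keep positive evidence only
        m = m / max(m.max(), 1e-12)                 # normalize each map to [0, 1]
        scale = (out_hw[0] / m.shape[0], out_hw[1] / m.shape[1])
        agg += zoom(m, scale, order=1)              # bilinear upsampling
    return agg / len(layer_maps)
```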
Exploring Interpretability for Predictive Process Analytics
[article]
2020
arXiv
pre-print
used to encode event log data to features used by a predictive model. ...
Multiple techniques have been proposed so far which encode the information available in an event log and construct input features required to train a predictive model. ...
Acknowledgement: We particularly thank the authors of the two process monitoring benchmarks [1, 3] for the high quality code they released which allowed us to explore model interpretability for predictive ...
arXiv:1912.10558v3
fatcat:62asvk7hpnfyvikbdzpbzqcgoe
Evaluations and Methods for Explanation through Robustness Analysis
[article]
2021
arXiv
pre-print
Feature-based explanations, which provide the importance of each feature towards the model prediction, are arguably one of the most intuitive ways to explain a model. ...
In this paper, we establish a novel set of evaluation criteria for such feature-based explanations by robustness analysis. ...
for evaluating feature-based explanations. ...
arXiv:2006.00442v2
fatcat:aqhdypkdxnd75jgwjlmroq6hbm
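One way to picture the robustness criterion: an explanation is good if perturbing only the features it flags flips the prediction at a small radius (and perturbing only the unflagged ones does not). The random search below is a crude stand-in for the paper's adversarial analysis, with an sklearn-style interface assumed:

```python
import numpy as np

def flip_radius(model, x, feature_set, eps_grid=np.linspace(0.01, 1.0, 50), n_trials=50):
    """Estimate the smallest perturbation magnitude, restricted to feature_set,
    that changes the model's prediction."""
    y0 = model.predict([x])[0]
    rng = np.random.default_rng(0)
    for eps in eps_grid:
        for _ in range(n_trials):
            delta = np.zeros_like(x)
            delta[feature_set] = rng.uniform(-eps, eps, size=len(feature_set))
            if model.predict([x + delta])[0] != y0:
                return eps
    return np.inf
```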
XAI in the context of Predictive Process Monitoring: Too much to Reveal
[article]
2022
arXiv
pre-print
To address this gap, we provide a framework to enable studying the effect of different PPM-related settings and ML model-related choices on characteristics and expressiveness of resulting explanations. ...
Even when employed under the same settings regarding data, preprocessing techniques, and ML models, explanations generated by multiple XAI methods differ profoundly. ...
Encoding techniques [1, 2] include static, aggregation, index-based, and last state techniques. ...
arXiv:2202.08265v1
fatcat:q6qt36prazgb3ikdtuskjid74i
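The four encoding families named in the snippet are easy to illustrate on a toy trace (activity names invented for the example):

```python
from collections import Counter

trace = ["register", "check", "check", "approve"]
vocab = ["register", "check", "approve", "reject"]

# Static encoding: case-level attributes that do not change over the trace.
static = {"channel": "online", "amount": 1200}

# Aggregation encoding: frequency of each activity seen so far.
aggregation = [Counter(trace)[a] for a in vocab]   # [1, 2, 1, 0]

# Index-based encoding: the activity at each position, padded to a fixed length.
index_based = (trace + ["PAD"] * 6)[:6]            # ['register', 'check', 'check', 'approve', 'PAD', 'PAD']

# Last-state encoding: only the most recent event's attributes.
last_state = trace[-1]                             # 'approve'
```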
Aggregating explanation methods for stable and robust explainability
[article]
2020
arXiv
pre-print
First, we investigate schemes to combine explanation methods and reduce model uncertainty to obtain a single aggregated explanation. ...
We provide evidence that the aggregation is better at identifying important features than individual methods are. Adversarial attacks on explanations are a recent, active research topic. ...
Based on this insight, we propose two ways to aggregate explanation methods, AGG-Mean and AGG-Var. ...
arXiv:1903.00519v5
fatcat:ovabgohbgfc4hizyolke5qcq4m
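A sketch of the two aggregates named in the snippet. The per-map normalization and the exact form of AGG-Var (here, scaling the mean by a signal-to-noise ratio across methods) are assumptions about one plausible reading:

```python
import numpy as np

def agg_mean(maps):
    """Average of per-method normalized attribution maps."""
    M = np.stack([m / max(np.abs(m).sum(), 1e-12) for m in maps])
    return M.mean(axis=0)

def agg_var(maps, eps=1e-12):
    """Variance-aware aggregate: features the methods agree on are boosted,
    features they disagree on are damped."""
    M = np.stack([m / max(np.abs(m).sum(), 1e-12) for m in maps])
    return M.mean(axis=0) / (M.std(axis=0) + eps)
```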
A Multilingual Perspective Towards the Evaluation of Attribution Methods in Natural Language Inference
[article]
2022
arXiv
pre-print
We then perform a comprehensive evaluation of attribution methods, considering different output mechanisms and aggregation methods. ...
Finally, we augment the XNLI dataset with highlight-based explanations, providing a multilingual NLI dataset with highlights, which may support future exNLP studies. ...
Acknowledgements We would like to thank Oleg Serikov for evaluating automatically extracted highlights in the Russian subset of our e-XNLI dataset and Ramazan Pala for reviews while drafting this paper ...
arXiv:2204.05428v1
fatcat:xqvmrzzbwrephpg7qnq4y6ggui
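One of the aggregation choices such an evaluation must make is how to merge subword attributions back into word-level scores before comparing them with highlight annotations. A minimal WordPiece-style sketch using max-pooling (sum and mean are the usual alternatives):

```python
def aggregate_subwords(tokens, scores):
    """Merge '##'-prefixed subword scores into word-level scores by max-pooling."""
    words, word_scores = [], []
    for tok, s in zip(tokens, scores):
        if tok.startswith("##") and words:
            words[-1] += tok[2:]
            word_scores[-1] = max(word_scores[-1], s)
        else:
            words.append(tok)
            word_scores.append(s)
    return words, word_scores

# aggregate_subwords(["the", "ex", "##plan", "##ation"], [0.1, 0.4, 0.3, 0.2])
# -> (["the", "explanation"], [0.1, 0.4])
```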
Showing results 1 — 15 out of 241,148 results