Induction of Non-Monotonic Logic Programs to Explain Boosted Tree Models Using LIME [article]

Farhad Shakerin, Gopal Gupta
2018 arXiv   pre-print
Then, in order to explain the model's global behavior, we propose the LIME-FOLD algorithm, a heuristic-based inductive logic programming (ILP) algorithm capable of learning non-monotonic logic programs ... We present a heuristic based algorithm to induce nonmonotonic logic programs that will explain the behavior of XGBoost trained classifiers. ... We would like to cordially thank Dr. Gautam Das for bringing LIME to our attention. ...
arXiv:1808.00629v2 fatcat:qivoa474arh4fa2qehfm3iymti
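As a rough illustration of the pipeline this entry describes, the sketch below trains an XGBoost classifier and collects per-instance LIME conditions that a FOLD-style rule learner could generalize. It is not the authors' LIME-FOLD implementation; the dataset, model settings, and the final rule-induction step are placeholders.

# A minimal sketch (not the authors' LIME-FOLD implementation): train an
# XGBoost classifier, then use LIME to collect per-instance feature
# conditions that could serve as input to an ILP/FOLD-style rule learner.
import xgboost
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer

data = load_breast_cancer()
X, y = data.data, data.target

model = xgboost.XGBClassifier(n_estimators=100).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=list(data.feature_names),
    class_names=["malignant", "benign"], discretize_continuous=True)

# Collect the top-weighted LIME conditions for a handful of instances;
# LIME-FOLD would generalize such conditions into non-monotonic rules.
candidate_conditions = []
for i in range(5):
    exp = explainer.explain_instance(X[i], model.predict_proba, num_features=3)
    candidate_conditions.append(exp.as_list())

for conds in candidate_conditions:
    print(conds)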

Induction of Non-Monotonic Logic Programs to Explain Boosted Tree Models Using LIME

Farhad Shakerin, Gopal Gupta
2019 Proceedings of the AAAI Conference on Artificial Intelligence
Then, in order to explain the model's global behavior, we propose the LIME-FOLD algorithm —a heuristic-based inductive logic programming (ILP) algorithm capable of learning nonmonotonic logic programs—that  ...  We present a heuristic based algorithm to induce nonmonotonic logic programs that will explain the behavior of XGBoost trained classifiers.  ...  We would like to cordially thank Dr. Gautam Das for bringing LIME to our attention.  ... 
doi:10.1609/aaai.v33i01.33013052 fatcat:kjb7pq7ycbblxa5q2jrujtiqcu

A Comprehensive Taxonomy for Explainable Artificial Intelligence: A Systematic Survey of Surveys on Methods and Concepts [article]

Gesina Schwalbe, Bettina Finzel
2021 arXiv   pre-print
With the amount of XAI methods vastly growing, a taxonomy of methods is needed by researchers as well as practitioners: To grasp the breadth of the topic, compare methods, and to select the right XAI method  ...  In the meantime, a wide variety of terminologies, motivations, approaches and evaluation criteria have been developed within the research field of explainable artificial intelligence (XAI).  ...  For example [74] examined the comprehensibility of programs learned with Inductive Logic Programming and [92] showed that comprehensibility of such programs could help laymen to understand how and  ... 
arXiv:2105.07190v2 fatcat:4c4zjb6hbbdbxiiw43qtvr3qhq

A Survey on the Explainability of Supervised Machine Learning [article]

Nadia Burkart, Marco F. Huber
2020 arXiv   pre-print
We conduct a state-of-the-art survey that reviews past and recent explainable SML approaches and classifies them according to the introduced definitions.  ...  This survey paper provides essential definitions, an overview of the different principles and methodologies of explainable Supervised Machine Learning (SML).  ...  It is part of first order logic (predicate logic) and able to define formal descriptions of logical contexts.  ... 
arXiv:2011.07876v1 fatcat:ccquewit2jam3livk77l5ojnqq

A Survey on the Explainability of Supervised Machine Learning

Nadia Burkart, Marco F. Huber
2021 Journal of Artificial Intelligence Research
We conduct a state-of-the-art survey that reviews past and recent explainable SML approaches and classifies them according to the introduced definitions.  ...  This survey paper provides essential definitions, an overview of the different principles and methodologies of explainable Supervised Machine Learning (SML).  ...  We would like to thank our student assistants (Maximilian Franz, Felix Rittmann, Jonas Steinhäuser and Jasmin Kling) who supported us during our research.  ... 
doi:10.1613/jair.1.12228 fatcat:nd3hfatjknhexb5eabklk657ey

Generating Explainable Rule Sets from Tree-Ensemble Learning Methods by Answer Set Programming

Akihiro Takemura, Katsumi Inoue
2021 Electronic Proceedings in Theoretical Computer Science  
We propose a method for generating explainable rule sets from tree-ensemble learners using Answer Set Programming (ASP).  ...  To this end, we adopt a decompositional approach where the split structures of the base decision trees are exploited in the construction of rules, which in turn are assessed using pattern mining methods  ...  Answer Set Programming [22] has its roots in logic programming and non-monotonic reasoning.  ...
doi:10.4204/eptcs.345.26 fatcat:wzm5sqr3ynbdnixhdcegt7lc2y
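As a hedged sketch of the decompositional step described in this entry (not the paper's ASP encoding, and using a scikit-learn random forest in place of the authors' setup), each root-to-leaf path of the base trees can be read off as a candidate conjunctive rule:

# A minimal sketch: enumerate root-to-leaf paths of the base trees as
# candidate conjunctive rules. The ASP component of the paper, which selects
# an explainable subset of these rules, is not reproduced here.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
forest = RandomForestClassifier(n_estimators=10, max_depth=3, random_state=0).fit(X, y)

def tree_to_rules(tree, feature_names):
    """Yield (conditions, predicted_class) pairs, one per leaf."""
    t = tree.tree_
    def recurse(node, conditions):
        if t.children_left[node] == -1:          # leaf node
            yield conditions, t.value[node].argmax()
            return
        name, thr = feature_names[t.feature[node]], t.threshold[node]
        yield from recurse(t.children_left[node], conditions + [f"{name} <= {thr:.2f}"])
        yield from recurse(t.children_right[node], conditions + [f"{name} > {thr:.2f}"])
    yield from recurse(0, [])

feature_names = ["sepal_len", "sepal_wid", "petal_len", "petal_wid"]
rules = [r for est in forest.estimators_ for r in tree_to_rules(est, feature_names)]
print(len(rules), "candidate rules; example:", rules[0])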

Interpretability and Explainability: A Machine Learning Zoo Mini-tour [article]

Ričards Marcinkevičs, Julia E. Vogt
2020 arXiv   pre-print
In this review, we examine the problem of designing interpretable and explainable machine learning models.  ...  In this review, we emphasise the divide between interpretability and explainability and illustrate these two different research directions with concrete examples of the state-of-the-art.  ...  While single if-then rules are indeed comprehensible, inductive logic programming [25] , for instance, yields an unordered set of conjunctive rules; on the other hand, decision trees [26] are not monotonic  ... 
arXiv:2012.01805v1 fatcat:rges764sdnchtb32fkaa44tsdq

Classification of Explainable Artificial Intelligence Methods through Their Output Formats

Giulia Vilone, Luca Longo
2021 Machine Learning and Knowledge Extraction  
Consequently, many scholars have developed a plethora of methods to explain their functioning and the logic of their inferences.  ...  Machine and deep learning have proven their utility to generate data-driven models with high accuracy and precision. However, their non-linear, complex structures are often difficult to interpret.  ...  Conflicts of Interest: The authors declare no conflict of interest.  ... 
doi:10.3390/make3030032 fatcat:2mf4wusxdnanthfdayt7iejgl4

Interpretable machine learning: Fundamental principles and 10 grand challenges

Cynthia Rudin, Chaofan Chen, Zhi Chen, Haiyang Huang, Lesia Semenova, Chudi Zhong
2022 Statistics Surveys
These problems are: (1) Optimizing sparse logical models such as decision trees; (2) Optimization of scoring systems; (3) Placing constraints into generalized additive models to encourage sparsity and  ...  ; (9) Characterization of the "Rashomon set" of good models; and (10) Interpretable reinforcement learning.  ...  Acknowledgments We thank Leonardo Lucio Custode for pointing out several useful references to Challenge 10. Thank you to David Page for providing useful references on early explainable ML.  ... 
doi:10.1214/21-ss133 fatcat:ahzfoilhmfa2rd4hcauvsn3eyy

Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges [article]

Cynthia Rudin, Chaofan Chen, Zhi Chen, Haiyang Huang, Lesia Semenova, Chudi Zhong
2021 arXiv   pre-print
These problems are: (1) Optimizing sparse logical models such as decision trees; (2) Optimization of scoring systems; (3) Placing constraints into generalized additive models to encourage sparsity and  ...  ; (9) Characterization of the "Rashomon set" of good models; and (10) Interpretable reinforcement learning.  ...  Acknowledgments We thank Leonardo Lucio Custode for pointing out several useful references to Challenge 10. Thank you to David Page for providing useful references on early explainable ML.  ... 
arXiv:2103.11251v2 fatcat:52llnswt3ze5rl3zhbai5bscce

Machine Learning in Python: Main Developments and Technology Trends in Data Science, Machine Learning, and Artificial Intelligence

Sebastian Raschka, Joshua Patterson, Corey Nolet
2020 Information  
Python continues to be the most preferred language for scientific computing, data science, and machine learning, boosting both performance and productivity by enabling the use of low-level libraries and  ...  At the core of this revolution lies the tools and the methods that are driving it, from processing the massive piles of data generated each day to learning from and taking useful action.  ...  Spatial relations between the superpixels can be extracted from inductive logic programming systems like Aleph in order to build a set of simple logical expressions that verbally explain predictions [  ... 
doi:10.3390/info11040193 fatcat:hetp7ngcpbbcpkhdcyowuiiwxe

Unbox the black-box for the medical explainable AI via multi-modal and multi-centre data fusion: A mini-review, two showcases and beyond

Guang Yang, Qinghao Ye, Jun Xia
2021 Information Fusion  
This research field inspects the measures and models involved in decision-making and seeks solutions to explain them explicitly.  ...  Consequently, our confidence in AI systems can be hindered by the lack of explainability in these black-box models.  ...  Similar to LIME, the implemented system trained a local surrogate model to mimic the black-box behaviour with a rule-based explanation, which can then be mined using a multi-label decision tree.  ... 
doi:10.1016/j.inffus.2021.07.016 pmid:34980946 pmcid:PMC8459787 fatcat:3rmzvn72dbgglcddgolce2xsfe
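The local-surrogate idea mentioned in this snippet can be sketched as follows. This is a generic LIME-style approximation with a single-label shallow tree, not the reviewed system or its multi-label mining step; the dataset, black-box model, and perturbation scheme are placeholders.

# A minimal sketch: approximate a black-box model around one instance with a
# shallow decision tree trained on perturbed samples (the general LIME-style
# local-surrogate idea).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target
black_box = GradientBoostingClassifier().fit(X, y)   # stand-in black-box model

# Perturb one instance with feature-scaled Gaussian noise.
rng = np.random.default_rng(0)
perturbed = X[0] + rng.normal(scale=X.std(axis=0), size=(500, X.shape[1]))
labels = black_box.predict(perturbed)                # black-box labels for the neighbourhood

# Fit a shallow decision tree as the local, rule-based surrogate and print its rules.
surrogate = DecisionTreeClassifier(max_depth=3).fit(perturbed, labels)
print(export_text(surrogate, feature_names=list(data.feature_names)))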

Explainable Artificial Intelligence: a Systematic Review [article]

Giulia Vilone, Luca Longo
2020 arXiv   pre-print
This is due to the widespread application of machine learning, particularly deep learning, that has led to the development of highly accurate models that lack explainability and interpretability.  ...  A plethora of methods to tackle this problem have been proposed, developed and tested.  ...  Tree Space Prototypes (TSP) [298] selects prototypes from a training dataset to explain the prediction made by ensembles of DTs and gradient boosted tree models on a new observation.  ...
arXiv:2006.00093v4 fatcat:dr26wgxvqrg7diljklhmdjkj7i
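As a loose sketch of the prototype idea behind TSP mentioned above: pick as prototypes the training points closest to the explained instance in the ensemble's tree space. Plain leaf co-occurrence proximity in a scikit-learn forest is an assumption here, not the paper's tree-space metric or prototype objective.

# A rough sketch of prototype selection by tree-ensemble proximity (leaf
# co-occurrence); the actual TSP method uses a more refined tree-space
# metric than this nearest-neighbour shortcut.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

leaves = forest.apply(X)          # (n_samples, n_trees) leaf indices
x_leaves = forest.apply(X[:1])    # leaves reached by the instance to explain

# Proximity = fraction of trees in which a training point shares a leaf
# with the explained instance; the closest points serve as prototypes.
proximity = (leaves == x_leaves).mean(axis=1)
prototype_idx = np.argsort(proximity)[::-1][1:4]   # skip the instance itself
print("prototype indices:", prototype_idx, "classes:", y[prototype_idx])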

Machine Learning in Python: Main developments and technology trends in data science, machine learning, and artificial intelligence [article]

Sebastian Raschka, Joshua Patterson, Corey Nolet
2020 arXiv   pre-print
Python continues to be the most preferred language for scientific computing, data science, and machine learning, boosting both performance and productivity by enabling the use of low-level libraries and  ...  At the core of this revolution lies the tools and the methods that are driving it, from processing the massive piles of data generated each day to learning from and taking useful action.  ...  Acknowledgments: We would like to thank John Zedlewski, Dante Gama Dessavre, and Thejaswi Nanditale from the RAPIDS team at NVIDIA and Scott Sievert for helpful feedback on the manuscript.  ... 
arXiv:2002.04803v2 fatcat:lvbczmz7xvbyjhs65zubwluzb4

Propositionalization and Embeddings: Two Sides of the Same Coin [article]

Nada Lavrač, Blaž Škrlj, Marko Robnik-Šikonja
2020 arXiv   pre-print
While both approaches aim at transforming data into tabular data format, they use different terminology and task definitions, are perceived to address different goals, and are used in different contexts  ...  This paper contributes a unifying framework that allows for improved understanding of these two data transformation techniques by presenting their unified definitions, and by explaining the similarities  ...  Further, we are grateful to Vid Podpečan and Nika Eržen for their help with the implementation of the new version of the PyRDM library.  ... 
arXiv:2006.04410v1 fatcat:idpgnam52jdnbbpv32qhm7o3im
Showing results 1 — 15 out of 76 results