911 Hits in 5.2 sec

Explaining Black-Box Algorithms Using Probabilistic Contrastive Counterfactuals [article]

Sainyam Galhotra, Romila Pradhan, Babak Salimi
<span title="2021-06-23">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
In this paper, we propose a principled causality-based approach for explaining black-box decision-making systems that addresses limitations of existing methods in XAI. ... At the core of our framework lies probabilistic contrastive counterfactuals, a concept that can be traced back to philosophical, cognitive, and social foundations of theories on how humans generate and ... This paper proposes a principled approach for explaining black-box decision-making systems using probabilistic contrastive counterfactuals. Key contributions include: 1. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2103.11972v2">arXiv:2103.11972v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/vh6f4f2kvfezrf6i26a2cflqpa">fatcat:vh6f4f2kvfezrf6i26a2cflqpa</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210625144151/https://arxiv.org/pdf/2103.11972v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/45/7e/457e3f9df50883f2c94af0332f0e8672ce729ac9.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2103.11972v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Explaining Reject Options of Learning Vector Quantization Classifiers [article]

André Artelt, Johannes Brinkrolf, Roel Visser, Barbara Hammer
<span title="2022-02-15">2022</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
We propose to use counterfactual explanations for explaining rejects and investigate how to efficiently compute counterfactual explanations of different reject options for an important class of models, ... With the ongoing rise of eXplainable AI, a lot of methods for explaining model predictions have been developed. ... option Eq. (8), we report the results of using Algorithm 2 (the results for the "true" black-box solver can be found in Appendix B); BbCf - counterfactuals computed by the black-box solver, TrainCf ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2202.07244v1">arXiv:2202.07244v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/p4dsu2qvanh5baccbbvs5bwuke">fatcat:p4dsu2qvanh5baccbbvs5bwuke</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220222182436/https://arxiv.org/pdf/2202.07244v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/5f/ba/5fbaff58d818145cc2fb2a3600bd5b79e1c86ae0.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2202.07244v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

LIMEtree: Interactively Customisable Explanations Based on Local Surrogate Multi-output Regression Trees [article]

Kacper Sokol, Peter Flach
<span title="2020-05-04">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
In this work we introduce a model-agnostic and post-hoc local explainability technique for black-box predictions called LIMEtree, which employs surrogate multi-output regression trees. ... Our method comes with local fidelity guarantees and can produce a range of diverse explanation types, including contrastive and counterfactual explanations praised in the literature. ... Acknowledgements: The authors would like to thank Alexander Hepburn and Raul Santos-Rodriguez for insightful discussions and their help with developing the code used for the experiments. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2005.01427v1">arXiv:2005.01427v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/xyvqoy5fivh7darsmbo54x5tdu">fatcat:xyvqoy5fivh7darsmbo54x5tdu</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200507044208/https://arxiv.org/pdf/2005.01427v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2005.01427v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Counterfactuals and Causability in Explainable Artificial Intelligence: Theory, Algorithms, and Applications [article]

Yu-Liang Chou and Catarina Moreira and Peter Bruza and Chun Ouyang and Joaquim Jorge
<span title="2021-06-08">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
A specific class of algorithms that have the potential to provide causability are counterfactuals. ... This research suggests that current model-agnostic counterfactual algorithms for explainable AI are not grounded on a causal theoretical formalism and, consequently, cannot promote causability to a human ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2103.04244v2">arXiv:2103.04244v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/uqs3y7v7hrhtxkh2ltl4wluyqe">fatcat:uqs3y7v7hrhtxkh2ltl4wluyqe</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210328131933/https://arxiv.org/pdf/2103.04244v1.pdf" title="fulltext PDF download [not primary version]" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <span style="color: #f43e3e;">&#10033;</span> <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/60/b5/60b5bfdfe7b83baf1c7040fa7588bb40451ac49c.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2103.04244v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

An Interpretable Probabilistic Approach for Demystifying Black-box Predictive Models [article]

Catarina Moreira and Yu-Liang Chou and Mythreyi Velmurugan and Chun Ouyang and Renuka Sindhgatta and Peter Bruza
<span title="2020-07-21">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
The use of sophisticated machine learning models for critical decision making is faced with the challenge that these models are often applied as a "black box". ... The framework supports extracting a Bayesian network as an approximation of the black-box model for a specific prediction. ... Although counterfactual explanations are useful, they do not explain why a certain prediction is made. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2007.10668v1">arXiv:2007.10668v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/j2i4qvqqnvhlfavroi5s5kvm3a">fatcat:j2i4qvqqnvhlfavroi5s5kvm3a</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200907062552/https://arxiv.org/pdf/2007.10668v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/9c/7c/9c7cb095405342b2578a62267b2a5f8b3197547e.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2007.10668v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Conversational Explanations of Machine Learning Predictions Through Class-contrastive Counterfactual Statements

Kacper Sokol, Peter Flach
<span title="">2018</span> <i title="International Joint Conferences on Artificial Intelligence Organization"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/vfwwmrihanevtjbbkti2kc3nke" style="color: black;">Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence</a> </i> &nbsp;
Therefore, in our research we address interpretability and explainability of predictions made by machine learning models. ... Our work draws heavily on human explanation research in social sciences: contrastive and exemplar explanations provided through a dialogue. ... Finally, we will design and implement algorithms to generate counterfactual explanations for geometric and probabilistic models and investigate explainability and interpretability of other components of ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.24963/ijcai.2018/836">doi:10.24963/ijcai.2018/836</a> <a target="_blank" rel="external noopener" href="https://dblp.org/rec/conf/ijcai/SokolF18.html">dblp:conf/ijcai/SokolF18</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/v7zzeya36jcmlmpeidk5lo6feq">fatcat:v7zzeya36jcmlmpeidk5lo6feq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20190223080741/http://pdfs.semanticscholar.org/4431/16f95615b0e6ca489610fcc1bc5dddda3bcf.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/44/31/443116f95615b0e6ca489610fcc1bc5dddda3bcf.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.24963/ijcai.2018/836"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> Publisher / doi.org </button> </a>

SA 2019 Placeholder Page

<span title="">2019</span> <i title="IEEE"> 2019 First International Conference on Societal Automation (SA) </i> &nbsp;
In our previous work we developed a causal rule-mining algorithm that provided contrastive explanations via rule-based notation. ... We are working on extending this work to exploit the strengths of process mining and Bayesian networks to better assure counterfactual explanations. ... Finally, there are areas, such as high-risk industry or health care, where black-box mechanisms are not yet well established as trustworthy algorithms. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/sa47457.2019.8938075">doi:10.1109/sa47457.2019.8938075</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/m5ixqeuzyjatjif244yqakayma">fatcat:m5ixqeuzyjatjif244yqakayma</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210429001227/https://ieeexplore.ieee.org/ielx7/8933294/8938028/08938075.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/3f/2e/3f2e14903d388334bb5eb7c24a2aee022834dacf.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/sa47457.2019.8938075"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> ieee.com </button> </a>

Counterfactual Explanation of Machine Learning Survival Models

Maxim Kovalev, Lev Utkin, Frank Coolen, Andrei Konstantinov
<span title="">2021</span> <i title="Vilnius University Press"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/2euqyspe25eahoco4ofscpbnai" style="color: black;">Informatica</a> </i> &nbsp;
It is shown that the counterfactual explanation problem can be reduced to a standard convex optimization problem with linear constraints when the explained black-box model is the Cox model. ... For other black-box models, it is proposed to apply the well-known Particle Swarm Optimization algorithm. Numerical experiments with real and synthetic data demonstrate the proposed method. ... In contrast to these black-box models, there are many survival models which are not black boxes, i.e. they are self-explainable and do not need to be explained. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.15388/21-infor468">doi:10.15388/21-infor468</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/b4ibqxz4ufeq3byubl3w5hqin4">fatcat:b4ibqxz4ufeq3byubl3w5hqin4</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20211212031510/https://informatica.vu.lt/journal/INFORMATICA/article/1238/file/pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/e4/63/e463889cf52714da673206f23fa0baf0f9c58d65.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.15388/21-infor468"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> Publisher / doi.org </button> </a>

A Survey of Contrastive and Counterfactual Explanation Generation Methods for Explainable Artificial Intelligence

Ilia Stepin, Jose M. Alonso, Alejandro Catala, Martin Pereira-Farina
<span title="">2021</span> <i title="Institute of Electrical and Electronics Engineers (IEEE)"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/q7qi7j4ckfac7ehf3mjbso4hne" style="color: black;">IEEE Access</a> </i> &nbsp;
INDEX TERMS: Computational intelligence, contrastive explanations, counterfactuals, explainable artificial intelligence, systematic literature review. ... Alternatively, contrastive and counterfactual explanations justify why the output of the algorithms is not any different and how it could be changed, respectively. ... including ML-based black-box algorithms. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/access.2021.3051315">doi:10.1109/access.2021.3051315</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/3zupk4jfdncuvj5osdkk7rykdm">fatcat:3zupk4jfdncuvj5osdkk7rykdm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210128183610/https://ieeexplore.ieee.org/ielx7/6287639/9312710/09321372.pdf?tp=&amp;arnumber=9321372&amp;isnumber=9312710&amp;ref=" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/e0/16/e016ee7bfc73cb5b8a92f6c517389be837c035eb.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/access.2021.3051315"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> ieee.com </button> </a>

CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms [article]

Martin Pawelczyk and Sascha Bielawski and Johannes van den Heuvel and Tobias Richter and Gjergji Kasneci
<span title="2021-08-02">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Going forward - to guarantee meaningful comparisons across explanation methods - we present CARLA (Counterfactual And Recourse LibrAry), a Python library for benchmarking counterfactual explanation methods ... In summary, our work provides the following contributions: (i) an extensive benchmark of 11 popular counterfactual explanation methods, (ii) a benchmarking framework for research on future counterfactual ... The authors of [48] provide FACE, which uses a shortest path algorithm on graphs to find counterfactual explanations. In contrast, Kanamori et al. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2108.00783v1">arXiv:2108.00783v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/pfogjxop2fhyhn4tpmrkbzzlma">fatcat:pfogjxop2fhyhn4tpmrkbzzlma</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210804145506/https://arxiv.org/pdf/2108.00783v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/85/25/8525736d680d631670edad6c02cf7b14bd698431.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2108.00783v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

A Survey on the Explainability of Supervised Machine Learning [article]

Nadia Burkart, Marco F. Huber
<span title="2020-11-16">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
The decision-making behind black boxes needs to be more transparent, accountable, and understandable for humans. ... E.g., artificial neural networks have a high accuracy, but humans often perceive the models as black boxes. Insights about the decision making are mostly opaque for humans. ... (..., 2008) is a sequential covering algorithm that uses iterative growing to extract decision rules from a black box. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2011.07876v1">arXiv:2011.07876v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/ccquewit2jam3livk77l5ojnqq">fatcat:ccquewit2jam3livk77l5ojnqq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20201118010221/https://arxiv.org/pdf/2011.07876v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/5f/ca/5fca8bbec714e403fa0f95a56b355c8ca835bcc0.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2011.07876v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

A Survey on the Explainability of Supervised Machine Learning

Nadia Burkart, Marco F. Huber
<span title="2021-01-19">2021</span> <i title="AI Access Foundation"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/4ax4efcwajcgvidb6hcg6mwx4a" style="color: black;">The Journal of Artificial Intelligence Research</a> </i> &nbsp;
The decision-making behind black boxes needs to be more transparent, accountable, and understandable for humans. ... E.g., artificial neural networks have a high accuracy, but humans often perceive the models as black boxes. Insights about the decision making are mostly opaque for humans. ... We would like to thank our student assistants (Maximilian Franz, Felix Rittmann, Jonas Steinhäuser and Jasmin Kling) who supported us during our research. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1613/jair.1.12228">doi:10.1613/jair.1.12228</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/nd3hfatjknhexb5eabklk657ey">fatcat:nd3hfatjknhexb5eabklk657ey</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210204071447/https://jair.org/index.php/jair/article/download/12228/26647" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/f9/1a/f91a18c266d1eafbff6a376145a49f8ba2763091.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1613/jair.1.12228"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> Publisher / doi.org </button> </a>

XPROAX: Local explanations for text classification with progressive neighborhood approximation [article]

Yi Cai, Arthur Zimek, Eirini Ntoutsi
<span title="2021-09-30">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
The importance of the neighborhood for training a local surrogate model to approximate the local decision boundary of a black-box classifier has already been highlighted in the literature. ... To overcome this problem, we propose a progressive approximation of the neighborhood using counterfactual instances as initial landmarks and a careful 2-stage sampling approach to refine counterfactuals ... Black-box models: As black-box models, we used a Random Forest (RF) and a Deep Neural Network (DNN). ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2109.15004v1">arXiv:2109.15004v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/ea7wl2pk2zegbkdqrbac3htu3e">fatcat:ea7wl2pk2zegbkdqrbac3htu3e</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20211002163613/https://arxiv.org/pdf/2109.15004v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/9f/49/9f49a314bd0c19a90bbfbde37b1cdf2160de0475.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2109.15004v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Minun

Jin Wang, Yuliang Li
<span title="2022-06-12">2022</span> <i title="ACM"> Proceedings of the Sixth Workshop on Data Management for End-To-End Machine Learning </i> &nbsp;
To address this issue, recent studies extended explainable AI techniques to explain black-box EM models. ... We utilize counterfactual examples generated from an EM-customized search space as the explanations and develop two search algorithms to efficiently find such results. ... Thus, it is rather challenging to apply them in explaining black-box EM models. ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1145/3533028.3533304">doi:10.1145/3533028.3533304</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/vloghw7ajbco3hath76mzv5xuq">fatcat:vloghw7ajbco3hath76mzv5xuq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220614223320/https://dl.acm.org/doi/pdf/10.1145/3533028.3533304" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/96/3d/963d82cbb320507e92d889bcf0293a065b7ce94f.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1145/3533028.3533304"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> acm.org </button> </a>

On Generating Plausible Counterfactual and Semi-Factual Explanations for Deep Learning [article]

Eoin M. Kenny, Mark T. Keane
<span title="2020-09-10">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
This paper advances a novel method for generating plausible counterfactuals (and semi-factuals) for black-box CNN classifiers doing computer vision. ... In contrast, however, semi-factuals, which are a similar way humans commonly explain their reasoning, have surprisingly received no attention. ... Computationally, counterfactuals provide explanations without having to "open the black box" (Grath et al. 2018). ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2009.06399v1">arXiv:2009.06399v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/2bmz34g2hbagfoampy7oye2ipe">fatcat:2bmz34g2hbagfoampy7oye2ipe</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20201004113012/https://arxiv.org/pdf/2009.06399v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2009.06399v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>
Showing results 1–15 out of 911